From: John Peterson <peterson@cf...> - 2007-02-26 16:35:22

Derek Gaston writes:
> For me, the most worthwhile optimization endeavor is parallel mesh.
> All day long I talk about how great libMesh is to all my coworkers at
> Sandia... and when I show them code snippets they go "wow" because of
> how simple it is to get seemingly complicated things done (things that
> would take a month to do in Sierra can be done in 10 minutes in
> libMesh). But then the discussion _always_ comes back to: "But does
> it do parallel mesh?"... and then I have to digress....

I agree. They aren't putting any fewer cores on CPUs these days, and I believe we are heading toward a semi-crisis in terms of the scalability of libMesh on clusters with nodes that have 2 or 4 dual-core CPUs and only 8GB (or less) of system memory. However, if we start seeing more 16GB server nodes I don't think I'll be worrying nearly as much.

John
From: Roy Stogner <roystgnr@ic...> - 2007-02-25 17:53:50

On Sun, 25 Feb 2007, Roy Stogner wrote:
> I've already made two big feature additions (AMR on
> non-Lagrange elements, and p refinement)

You know, I forgot to think about how my optimization idea would interact with p refinement. That's another few bytes per object, plus it would make the global indexing calculations take about ten times longer. Consider also the 28 bytes per node of geometry and Node::key data, and my idea is sounding less and less worthwhile.

---
Roy
From: Roy Stogner <roystgnr@ic...> - 2007-02-25 17:41:10

On Sun, 25 Feb 2007, Derek Gaston wrote:
> This is mostly a subsystem change, but you have to consider that if
> you want to bring outside FE developers in to help.... the more of
> these things you do the harder it is for someone to get in there and
> change things.

That's a worry, yes. The biggest drawback of my idea is that it would require a new Enum of DofObject types, a new method like Elem::dof_object_type(node_num), and so it would give people one more thing to worry about when adding new geometric elements. Granted, we're not planning on adding more geometric elements any time soon, but we'll have to add a few eventually if we ever want to support p>2 on tets and prisms or even p>1 on pyramids.

But I really think there's a lot of room to work within DofObject and DofMap. I've already made two big feature additions (AMR on non-Lagrange elements, and p refinement) and one slight optimization (storing dof index ranges as first,count rather than as whole lists), and neither required any changes to the API or any DoF-related changes to other code. I don't think this change would affect any code outside of the dof_map and dof_object files either. It's an incredibly well-isolated subsystem, especially considering how low-level it is.

> Keeping subsystems logically similar to how someone
> new to the code would think about it is advantageous in the long run.
> (but of course the whole dofobject thing is fairly complicated now
> anyway, so a little added complication might not hurt ;)

Bah; someone new to the code shouldn't be futzing around in the most low-level systems anyway. I probably only got away with it because Ben wasn't watching the CVS logs closely enough. ;)

The complication should all be hidden, too. The DofObject and DofMap APIs wouldn't have to change, just the underlying implementation.

> Anyway, what you are proposing isn't going that far... I'm just trying
> to illustrate some of the pitfalls of "over-optimization".
Oh, I understand being wary about it.

> Essentially, if you were to take memory optimization to the max....
> you would end up with Sierra... and I think _none_ of us wants that!

You know these mailing list messages get archived and Google-indexed where managers can read them, right? ;)

> For me, the most worthwhile optimization endeavor is parallel mesh.

Agreed. Tweaking DofObject would probably save dozens of megabytes on medium-sized problems, but parallelizing the mesh would save gigabytes on large problems. That's another endeavor I wish I had the time (and the MPI skills) for. Ben was sounding more motivated about it the last time he was in Austin, though; perhaps once he's got his defense out of the way we can goad him into action.

---
Roy
From: Derek Gaston <friedmud@gm...> - 2007-02-25 17:10:47

The idea definitely sounds promising, but I will caution against unnecessary complication in the name of memory optimization. In Sierra no Mesh object (node, element, face, etc.) knows its own type (MeshObj is essentially a typeless container)... therefore the code itself must keep track of which type is which and sort it out when necessary. At first this sounds like a great idea (MeshObj are really light... and really general) until you actually start implementing in code... and you end up dynamic-casting all over the place to figure out what is what. It makes things that should be relatively simple _much_ more difficult (like seeing if a point lies within an element... I'll give you an example of this on Monday).

This is mostly a subsystem change, but you have to consider that if you want to bring outside FE developers in to help.... the more of these things you do the harder it is for someone to get in there and change things. Keeping subsystems logically similar to how someone new to the code would think about it is advantageous in the long run. (but of course the whole dofobject thing is fairly complicated now anyway, so a little added complication might not hurt ;)

Anyway, what you are proposing isn't going that far... I'm just trying to illustrate some of the pitfalls of "over-optimization". Essentially, if you were to take memory optimization to the max.... you would end up with Sierra... and I think _none_ of us wants that!

For me, the most worthwhile optimization endeavor is parallel mesh. All day long I talk about how great libMesh is to all my coworkers at Sandia... and when I show them code snippets they go "wow" because of how simple it is to get seemingly complicated things done (things that would take a month to do in Sierra can be done in 10 minutes in libMesh). But then the discussion _always_ comes back to: "But does it do parallel mesh?"... and then I have to digress....
Derek
From: Roy Stogner <roystgnr@ic...> - 2007-02-24 20:18:39

I'm never going to find time to implement this myself, but I thought I'd throw it out there in case anyone else is interested:

With reference counting off, DofObjects in libMesh currently have:

DofObject *old_dof_object
int _id
short int _processor_id
char _n_systems
(probably 1 byte alignment padding here)
unsigned char *_n_vars
unsigned char **_n_comp
unsigned char **_dof_ids

That's 24 bytes on a 32-bit system, 40 bytes on an LP64 system.

But, the pointers are all pointing to things allocated on the heap... does anyone know how much memory management overhead we get with g++ and the default new()?

Even with 0 overhead, we still end up with at least an additional byte and two pointers per system, plus a byte and an int per variable... and then the old_dof_object doubles that. For Vikram's code, as an example, with 3 systems and up to 5 variables on each node, that's something like 152 bytes on a 32-bit system or 232 bytes on an LP64 system.

So here's my thought:

What if we have this for each DofObject:

int _id
short int _processor_id
char _dof_object_type
char _old_dof_object_type
int _per_type_id
int _old_per_type_id

And this for each DofMap:

std::vector<int> dof_renumbering(n_dofs)

That's 16 bytes per DofObject, plus 4 bytes per variable (totaling up to 36 bytes per node in Vikram's code), with no allocator overhead.

The _dof_object_type tells the DofMap what kind of object it is: Elem, vertex, hanging vertex/edge, hanging vertex/face, edge, etc. Likewise for the _old_dof_object_type. The _per_type_id numbers each type of DofObject from 0 to (e.g.) n_vertices; likewise for _old_per_type_id.

The "raw" dof numbering then goes in order by object type: all the vertices first, all the edges next, all the faces third, etc. With isotropic refinement, there should only be a dozen or two types of objects.
Because n_comp is the same for DofObjects of the same _dof_object_type, to get the dof numbering for a variable on a DoF object you just add: for the v variable on an edge in a u/v/p system, for example, the raw global DoF index is:

raw_global_index = n_u_dofs + n_v_vertices*n_v_dofs_per_vertex +
                   _per_type_id*n_v_dofs_per_edge + edge_local_index

And the global DoF index is dof_renumbering[raw_global_index] (which is necessary, most importantly, to keep DoFs on each processor contiguous).

This is probably way too much work to save what may add up to only a hundred megs of RAM... but it seems in my experience that Mesh objects are taking more memory than they should, and the first suspect IMHO is the DofObject class.

---
Roy
From: Travis Austin <austint73@gm...> - 2007-02-22 00:52:43

Please note that you must have a recent version of petsc (2.3.x) that is compiled with HYPRE support. Earlier versions of petsc do not make room for HYPRE use.
From: Roy Stogner <roystgnr@ic...> - 2007-02-22 00:36:08

On Thu, 22 Feb 2007, Travis Austin wrote:
> I've tested my patch by doing a 'make run_examples' using
> petsc-2.1.6 and petsc-2.2.0. All of the examples run fine and
> everything seems to be OK. The more recent petsc that I have tested
> against was petsc-2.3.2-p6.
>
> I used default compilation in terms of OPT or DBG. Are there other
> tests to run?

The test coverage we get with "make run_examples" is abysmal, but it would have caught any obvious errors with those PETSc versions, and I haven't noticed any subtle errors with petsc-2.3.1. I'll commit your patch (with a slight assert() reordering John suggested) to CVS tonight; I don't think it'll break anyone else's code either.

For those who want to experiment with algebraic multigrid, try the latest CVS. The options "-pc_type hypre -pc_hypre_type boomeramg" seem to be a good start.

---
Roy
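These are PETSc runtime options, so they go on the application's command line. A first experiment might look like the following (the executable name is a placeholder, and this assumes a PETSc 2.3.x build configured with hypre, per Travis's note):

```shell
# Placeholder application name; -ksp_monitor just prints residual norms.
./my_libmesh_app -pc_type hypre -pc_hypre_type boomeramg -ksp_monitor
```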
From: Roy Stogner <roystgnr@ic...> - 2007-02-21 04:16:39

Just add ".bz2" to the end of any filename you hand to the basic EquationSystems/Mesh read()/write() methods, and libMesh should now handle uncompressing of input files and compressing of output files on the fly.

I say "should", because currently the mechanism is to call system("bzip2 blah"), creating a temp file with the .bz2 part of the extension stripped off. This will wipe any file which currently has that name, it won't work on systems without bzip2 installed, it shouldn't work if you don't have rwx permissions on the directory you're reading files from, it probably won't work if you use spaces or other shell-interpreted special characters in your filenames, and it may break if you look at it funny.

It works for me, though, and it's useful, so I'm committing to CVS in case anyone wants to use it now and/or make the implementation more robust later.

---
Roy
From: Roy Stogner <roystgnr@ic...> - 2007-02-20 14:26:57

On Tue, 20 Feb 2007, Roy Stogner wrote:
> The gripping hand is it's not a .xda/.xdr problem at all.
> Uncommenting equation_systems2.reinit() causes the state to remain
> unchanged, whereas a call to equation_systems.reinit() causes the
> same sort of slight state change before any files are even written.

The change appears to be occurring in System::project_vector - but with no elements flagged for refinement or coarsening, that function should just be copying degrees of freedom, not doing anything to change them. I can't figure out yet exactly where anything's being changed - DofMap::enforce_constraints_exactly() gets called at the end of the projection now, but that shouldn't have any effect on a uniform mesh.

---
Roy
From: <tim@ce...> - 2007-02-20 14:02:59

Dear John,

On Tue, 20 Feb 2007, John Peterson wrote:
> If we write out only 12 digits of the solution (which I think is true
> for xda) could this explain the discrepancy?

Yes, but I'm using xdr, not xda (see source code).

Best Regards,

Tim
From: Roy Stogner <roystgnr@ic...> - 2007-02-20 14:01:39

On Tue, 20 Feb 2007, Tim Kröger wrote:
> Hmm. Just to confuse you, I wrote the attached test program. It creates a
> grid, solves a simple PDE on it, solves it twice again, then writes
> everything to xdr files, reads them in and solves again. Obviously (from the
> attached output, produced with -ksp_monitor), reading the files in does not
> exactly restore the original state.

Now that's a tough one. On the one hand, the original state should be restored exactly in your example... I suppose there could be some data that's sitting in cache at 80 bits but gets reread at 64 bits, but I doubt it. On the other hand, I'm finding it hard to get too excited about a state change that's smaller than most linear solver tolerances.

The gripping hand is it's not a .xda/.xdr problem at all. Uncommenting equation_systems2.reinit() causes the state to remain unchanged, whereas a call to equation_systems.reinit() causes the same sort of slight state change before any files are even written.

---
Roy
From: Kirk, Benjamin (JSC-EG) <benjamin.kirk1@na...> - 2007-02-20 13:58:04

I am too lazy to do this myself - what if you write/read an XDR file instead? It should store a double as a true double and hopefully bypass this issue.

-----Original Message-----
From: libmesh-devel-bounces@... [mailto:libmesh-devel-bounces@...] On Behalf Of John Peterson
Sent: Tuesday, February 20, 2007 7:51 AM
To: Tim Kröger
Cc: libmesh-devel@...; Roy Stogner
Subject: Re: [Libmesh-devel] Two items on my wishlist

Hi Tim,

If we write out only 12 digits of the solution (which I think is true for xda) could this explain the discrepancy? The residual norm computed from the read-in solution looks the same to within about 12 digits.

John

Tim Kröger writes:
> Dear Roy,
>
> On Mon, 19 Feb 2007, Roy Stogner wrote:
>
> > My current work with another libMesh user has made me realize that we
> > need:
> >
> > 1. Support for taking .xda/.xdr files which have been written with
> > one setting of ENABLE_INFINITE_ELEMENTS and reading them with the
> > opposite setting. Currently this seems to break.
> >
> > However, my recent experiences fixing the "scrambled DoF indices with
> > non-Lagrange AMR restarts" bug and the "not writing .xda files with
> > enough precision" bug have made me realize the importance of:
> >
> > 2. Me never having to read through the xdr* source code again.
> >
> > I'd love to dive right in and implement my wishlist item 1, but
> > unfortunately that conflicts with wishlist item 2. My hands are tied.
>
> Hmm. Just to confuse you, I wrote the attached test program. It
> creates a grid, solves a simple PDE on it, solves it twice again, then
> writes everything to xdr files, reads them in and solves again.
> Obviously (from the attached output, produced with -ksp_monitor),
> reading the files in does not exactly restore the original state.
> However, I guess that changing this behaviour, even if it could become
> an item 3 of your wishlist, would still conflict with item 2.
>
> Best Regards,
>
> Tim
From: John Peterson <peterson@cf...> - 2007-02-20 13:51:50

Hi Tim,

If we write out only 12 digits of the solution (which I think is true for xda) could this explain the discrepancy? The residual norm computed from the read-in solution looks the same to within about 12 digits.

John

Tim Kröger writes:
> Dear Roy,
>
> On Mon, 19 Feb 2007, Roy Stogner wrote:
>
> > My current work with another libMesh user has made me realize that we
> > need:
> >
> > 1. Support for taking .xda/.xdr files which have been written with
> > one setting of ENABLE_INFINITE_ELEMENTS and reading them with the
> > opposite setting. Currently this seems to break.
> >
> > However, my recent experiences fixing the "scrambled DoF indices with
> > non-Lagrange AMR restarts" bug and the "not writing .xda files with
> > enough precision" bug have made me realize the importance of:
> >
> > 2. Me never having to read through the xdr* source code again.
> >
> > I'd love to dive right in and implement my wishlist item 1, but
> > unfortunately that conflicts with wishlist item 2. My hands are tied.
>
> Hmm. Just to confuse you, I wrote the attached test program. It
> creates a grid, solves a simple PDE on it, solves it twice again, then
> writes everything to xdr files, reads them in and solves again.
> Obviously (from the attached output, produced with -ksp_monitor),
> reading the files in does not exactly restore the original state.
> However, I guess that changing this behaviour, even if it could become
> an item 3 of your wishlist, would still conflict with item 2.
> > Best Regards, > > Tim./test ksp_monitor > 0 KSP Residual norm 1.733328971787e+01 > 1 KSP Residual norm 1.185778941653e+01 > 2 KSP Residual norm 1.139134743178e+01 > 3 KSP Residual norm 9.913079902632e+00 > 4 KSP Residual norm 7.191145025915e+00 > 5 KSP Residual norm 5.172029883955e+00 > 6 KSP Residual norm 2.147425476520e+00 > 7 KSP Residual norm 8.687575748560e01 > 8 KSP Residual norm 4.164173667107e01 > 9 KSP Residual norm 2.443014766559e01 > 10 KSP Residual norm 1.888013690466e01 > 11 KSP Residual norm 1.545632671036e01 > 12 KSP Residual norm 9.547251214559e02 > 13 KSP Residual norm 3.558309380772e02 > 14 KSP Residual norm 2.119986479829e02 > 15 KSP Residual norm 1.472055648826e02 > 16 KSP Residual norm 7.844202526193e03 > 17 KSP Residual norm 4.324782781627e03 > 18 KSP Residual norm 2.537192913599e03 > 19 KSP Residual norm 1.372628313957e03 > 20 KSP Residual norm 7.072639404731e04 > 21 KSP Residual norm 2.981094657572e04 > 22 KSP Residual norm 1.807779624842e04 > 23 KSP Residual norm 1.340784951662e04 > 24 KSP Residual norm 1.088577505522e04 > 25 KSP Residual norm 7.207500151591e05 > 26 KSP Residual norm 3.887867139016e05 > 27 KSP Residual norm 2.190821106975e05 > 28 KSP Residual norm 1.040052415721e05 > 29 KSP Residual norm 3.856840518033e06 > 30 KSP Residual norm 1.989373498593e06 > 31 KSP Residual norm 1.442089144671e06 > 32 KSP Residual norm 8.398949810780e07 > 33 KSP Residual norm 4.613052610992e07 > 34 KSP Residual norm 2.446011262135e07 > 35 KSP Residual norm 1.402013920665e07 > 36 KSP Residual norm 9.288528753379e08 > 37 KSP Residual norm 5.569095323071e08 > 38 KSP Residual norm 3.080619009371e08 > 39 KSP Residual norm 2.149324393284e08 > 40 KSP Residual norm 1.668599996685e08 > 41 KSP Residual norm 1.297303487430e08 > 42 KSP Residual norm 9.127468731142e09 > 43 KSP Residual norm 6.183347158799e09 > 44 KSP Residual norm 4.688102496189e09 > 45 KSP Residual norm 4.418529129665e09 > 46 KSP Residual norm 4.266320304576e09 > 47 KSP Residual norm 
4.097419313482e09 > 48 KSP Residual norm 3.918923888652e09 > 49 KSP Residual norm 3.395719439036e09 > 50 KSP Residual norm 2.538813185638e09 > 51 KSP Residual norm 1.415015667448e09 > 52 KSP Residual norm 6.308264312374e10 > 53 KSP Residual norm 3.843345277332e10 > 54 KSP Residual norm 2.514734506456e10 > 55 KSP Residual norm 1.710479832166e10 > 56 KSP Residual norm 1.151796274421e10 > 57 KSP Residual norm 7.714264551552e11 > 58 KSP Residual norm 4.742723602833e11 > 59 KSP Residual norm 3.379626986680e11 > 60 KSP Residual norm 2.548253614890e11 > 61 KSP Residual norm 2.095190091567e11 > 62 KSP Residual norm 1.512617662327e11 >  > 0 KSP Residual norm 1.510050418779e11 >  > 0 KSP Residual norm 1.510050418779e11 >  > 0 KSP Residual norm 1.611365184359e11 > #include <iostream> > #include <algorithm> > #include <math.h> > > #include "libmesh.h" > #include "mesh.h" > #include "mesh_generation.h" > #include "gmv_io.h" > #include "equation_systems.h" > #include "fe.h" > #include "quadrature_gauss.h" > #include "dof_map.h" > #include "sparse_matrix.h" > #include "numeric_vector.h" > #include "dense_matrix.h" > #include "dense_vector.h" > #include "linear_implicit_system.h" > #include "transient_system.h" > #include "perf_log.h" > #include "boundary_info.h" > #include "utility.h" > #include "elem.h" > #include "mesh_refinement.h" > > void assemble(EquationSystems& es, const std::string& system_name) > { > const Mesh& mesh = es.get_mesh(); > const unsigned int dim = mesh.mesh_dimension(); > LinearImplicitSystem& system = es.get_system<LinearImplicitSystem> ("e"); > const DofMap& dof_map = system.get_dof_map(); > FEType fe_type = dof_map.variable_type(0); > AutoPtr<FEBase> fe (FEBase::build(dim, fe_type)); > QGauss qrule (dim, FIFTH); > fe>attach_quadrature_rule (&qrule); > const std::vector<Real>& JxW = fe>get_JxW(); > const std::vector<std::vector<Real> >& phi = fe>get_phi(); > const std::vector<std::vector<RealGradient> >& dphi = fe>get_dphi(); > DenseMatrix<Number> Ke; > 
DenseVector<Number> Fe; > std::vector<unsigned int> dof_indices; > > MeshBase::const_element_iterator el = mesh.elements_begin(); > const MeshBase::const_element_iterator end_el = mesh.elements_end(); > for ( ; el != end_el ; ++el) > { > const Elem* elem = *el; > dof_map.dof_indices (elem, dof_indices); > fe->reinit (elem); > Ke.resize (dof_indices.size(), > dof_indices.size()); > Fe.resize (dof_indices.size()); > > for (unsigned int qp=0; qp<qrule.n_points(); qp++) > { > for (unsigned int i=0; i<phi.size(); i++) > { > Fe(i) += JxW[qp]*1.0*phi[i][qp]; > for (unsigned int j=0; j<phi.size(); j++) > { > Ke(i,j) += JxW[qp]*(dphi[i][qp]*dphi[j][qp]+phi[i][qp]*phi[j][qp]); > } > } > } > > system.matrix->add_matrix (Ke, dof_indices); > system.rhs->add_vector (Fe, dof_indices); > } > } > > int main (int argc, char** argv) > { > libMesh::init (argc, argv); > { > const unsigned int dim = 3; > Mesh mesh (dim); > > MeshTools::Generation::build_cube (mesh, > 4, 4, 4, > 0., 1., > 0., 1., > 0., 1., > HEX27); > MeshRefinement mesh_refinement(mesh); > mesh_refinement.uniformly_refine(1); > EquationSystems equation_systems (mesh); > equation_systems.add_system<LinearImplicitSystem> ("e"); > equation_systems.get_system("e").add_variable("u", SECOND); > equation_systems.get_system("e").attach_assemble_function (assemble); > equation_systems.init(); > equation_systems.get_system("e").solve(); > > std::cout << "" << std::endl; > > equation_systems.get_system("e").solve(); > > std::cout << "" << std::endl; > > equation_systems.get_system("e").solve(); > > std::cout << "" << std::endl; > > mesh.write("mesh.xdr"); > equation_systems.write("systems.xdr", > libMeshEnums::WRITE, > EquationSystems::WRITE_DATA | EquationSystems::WRITE_ADDITIONAL_DATA); > > Mesh mesh2(dim); > mesh2.read("mesh.xdr"); > EquationSystems equation_systems2(mesh2); > equation_systems2.read("systems.xdr", > libMeshEnums::READ, > 
EquationSystems::READ_DATA | EquationSystems::READ_ADDITIONAL_DATA | EquationSystems::READ_HEADER); > equation_systems2.reinit(); > equation_systems2.get_system("e").attach_assemble_function (assemble); > equation_systems2.get_system("e").solve(); > } > > return libMesh::close (); > } > > -- > Take Surveys. Earn Cash. Influence the Future of IT > Join SourceForge.net's Techsay panel and you'll get the chance to share your > opinions on IT & business topics through brief surveys - and earn cash > http://www.techsay.com/default.php?page=join.php&p=sourceforge&CID=DEVDEV > _______________________________________________ > Libmesh-devel mailing list > Libmesh-devel@... > https://lists.sourceforge.net/lists/listinfo/libmesh-devel 
From: <tim@ce...> - 2007-02-20 08:13:49

Dear Roy, On Mon, 19 Feb 2007, Roy Stogner wrote: > My current work with another libMesh user has made me realize that we > need: > > 1. Support for taking .xda/.xdr files which have been written with > one setting of ENABLE_INFINITE_ELEMENTS and reading them with the > opposite setting. Currently this seems to break. > > However, my recent experiences fixing the "scrambled DoF indices with > non-Lagrange AMR restarts" bug and the "not writing .xda files with > enough precision" bug have made me realize the importance of: > > 2. Me never having to read through the xdr* source code again. > > I'd love to dive right in and implement my wishlist item 1, but > unfortunately that conflicts with wishlist item 2. My hands are tied. Hmm. Just to confuse you, I wrote the attached test program. It creates a grid, solves a simple PDE on it, solves it twice again, then writes everything to xdr files, reads them in and solves again. Obviously (from the attached output, produced with -ksp_monitor), reading the files in does not exactly restore the original state. However, I guess that changing this behaviour, even if it could become an item 3 of your wishlist, would still conflict with item 2. Best Regards, Tim 
From: sun <pythonsun@gm...> - 2007-02-20 08:13:27

Hi all, first I want to see what libMesh can do and try some of the 'examples' in the download files. Can I do this in Visual Studio? sun 
From: Roy Stogner <roystgnr@ic...> - 2007-02-19 19:44:57

My current work with another libMesh user has made me realize that we need: 1. Support for taking .xda/.xdr files which have been written with one setting of ENABLE_INFINITE_ELEMENTS and reading them with the opposite setting. Currently this seems to break. However, my recent experiences fixing the "scrambled DoF indices with non-Lagrange AMR restarts" bug and the "not writing .xda files with enough precision" bug have made me realize the importance of: 2. Me never having to read through the xdr* source code again. I'd love to dive right in and implement my wishlist item 1, but unfortunately that conflicts with wishlist item 2. My hands are tied. -- Roy 
From: <tim@ce...> - 2007-02-16 07:00:36

Dear all, On Thu, 15 Feb 2007, Roy Stogner wrote: > On Thu, 15 Feb 2007, John Peterson wrote: > >> Roy Stogner writes: >>> >>> I don't know -- we already have the slightly confusing standard of >>> n_elem > n_active_elem because the former includes ancestor elements >>> and possibly subactive elements -- why can't it include NULL elements >>> as well? ;) >> >> Yes, and there may be some utility to having a fast "upper bound" on >> the number of elements and nodes, for things which just want to >> preallocate an approximate amount of memory... _elements.size() is >> definitely a reliable upper bound. > > I was mostly joking, but you're right. We probably should have public > functions like "max_elem_id()" and "max_node_id()" to use in places > where code just wants to generate a per-element vector; mostly code > now is just using n_elem(). What about leaving n_elem() as it is and adding a new function accurate_n_elem() (or whatever a good name is) that does not count NULL elements? This way, you can be sure that you don't break anyone's code. Best Regards, Tim 
From: John Peterson <peterson@cf...> - 2007-02-15 16:56:39

Kirk, Benjamin (JSC-EG) writes: > Why not wrap some safety code inside > > #ifdef DEBUG > (compute O(N) slow values) > (assert slow way = vector.size()) > #endif > > To catch when they may be wrong? > > Are you hitting a case where you need them but they are wrong?? No, but I think fixing this now could prevent a potential future bug hunt in an unrelated problem. It's definitely not a common-use problem at all. > > Ben > > -----Original Message----- > From: libmesh-devel-bounces@... > [mailto:libmesh-devel-bounces@...] On Behalf Of John > Peterson > Sent: Thursday, February 15, 2007 10:16 AM > To: libmesh-devel@... > Subject: [Libmesh-devel] Mesh::n_elem() and Mesh::n_nodes() > > Hi, > > These functions are O(1) "fast" (they just return the sizes of the > underlying std::vectors of Nodes and Elements) but they are not always > 100% correct. > > For example, this is not correct during coarsening when elements are > being deleted and their vector entries are being replaced by NULLs. > It's also not correct if I build a mesh and randomly delete an element > from it, which is allowed by the current interface as well. > > An alternative is to replace these implementations with something > similar to the n_active_elem() functions (defined in mesh_base.C) which > use the Mesh iterators but are O(N) complexity. > > Another possible try would be to keep a "running total" of the number of > nodes/elements that would be incremented/decremented by the > Mesh::add_elem(), Mesh::delete_elem(), Mesh::add_point(), and > Mesh::delete_node() functions. However, I see this as potentially > error-prone as well. > > Thoughts? > John 
From: Roy Stogner <roystgnr@ic...> - 2007-02-15 16:46:22

On Thu, 15 Feb 2007, John Peterson wrote: > Roy Stogner writes: > > > > I don't know -- we already have the slightly confusing standard of > > n_elem > n_active_elem because the former includes ancestor elements > > and possibly subactive elements -- why can't it include NULL elements > > as well? ;) > > Yes, and there may be some utility to having a fast "upper bound" on > the number of elements and nodes, for things which just want to > preallocate an approximate amount of memory... _elements.size() is > definitely a reliable upper bound. I was mostly joking, but you're right. We probably should have public functions like "max_elem_id()" and "max_node_id()" to use in places where code just wants to generate a per-element vector; mostly code now is just using n_elem(). -- Roy 
From: John Peterson <peterson@cf...> - 2007-02-15 16:37:56

Roy Stogner writes: > On Thu, 15 Feb 2007, John Peterson wrote: > > > These functions are O(1) "fast" (they just return the sizes of the > > underlying std::vectors of Nodes and Elements) but they are not always > > 100% correct. > > I don't know -- we already have the slightly confusing standard of > n_elem > n_active_elem because the former includes ancestor elements > and possibly subactive elements -- why can't it include NULL elements > as well? ;) Yes, and there may be some utility to having a fast "upper bound" on the number of elements and nodes, for things which just want to preallocate an approximate amount of memory... _elements.size() is definitely a reliable upper bound. > > An alternative is to replace these implementations with something > > similar to the n_active_elem() functions (defined in mesh_base.C) > > which use the Mesh iterators but are O(N) complexity. > > > > Another possible try would be to keep a "running total" of the number > > of nodes/elements that would be incremented/decremented by the > > Mesh::add_elem(), Mesh::delete_elem(), Mesh::add_point(), and > > Mesh::delete_node() functions. However, I see this as potentially > > error-prone as well. > > > > Thoughts? > > O(N) is too expensive when it's in a function that often gets tested > after every iteration of a for loop. > > I'd try to keep a running total, but then check it with the O(N) algorithm > when DEBUG is defined. The element and node vectors are private, so > the potential for adding bugs to the running total should at least > just be limited to the Mesh class and friends. Alright, this is what occurred to me directly after sending the previous email as well. A nice #ifdef DEBUG which double-checks the running total. > This might also be a good time to go through the code and try to get > rid of any n_elem() calls we can. Just grepping through the code I > see a few places where we ought to be using iterators instead. Yes, me as well. 
I'm changing several mesh-related files at the moment to re-enable the build_sphere() capability. Unless you really trust CVS :) you might want to hold off on huge amounts of changes till I check this in (probably around noon or before). John 
From: Roy Stogner <roystgnr@ic...> - 2007-02-15 16:30:26

On Thu, 15 Feb 2007, John Peterson wrote: > These functions are O(1) "fast" (they just return the sizes of the > underlying std::vectors of Nodes and Elements) but they are not always > 100% correct. I don't know -- we already have the slightly confusing standard of n_elem > n_active_elem because the former includes ancestor elements and possibly subactive elements -- why can't it include NULL elements as well? ;) > An alternative is to replace these implementations with something > similar to the n_active_elem() functions (defined in mesh_base.C) > which use the Mesh iterators but are O(N) complexity. > > Another possible try would be to keep a "running total" of the number > of nodes/elements that would be incremented/decremented by the > Mesh::add_elem(), Mesh::delete_elem(), Mesh::add_point(), and > Mesh::delete_node() functions. However, I see this as potentially > error-prone as well. > > Thoughts? O(N) is too expensive when it's in a function that often gets tested after every iteration of a for loop. I'd try to keep a running total, but then check it with the O(N) algorithm when DEBUG is defined. The element and node vectors are private, so the potential for adding bugs to the running total should at least just be limited to the Mesh class and friends. This might also be a good time to go through the code and try to get rid of any n_elem() calls we can. Just grepping through the code I see a few places where we ought to be using iterators instead. -- Roy 
From: Kirk, Benjamin (JSC-EG) <benjamin.kirk-1@na...> - 2007-02-15 16:21:06

Why not wrap some safety code inside #ifdef DEBUG (compute O(N) slow values) (assert slow way = vector.size()) #endif To catch when they may be wrong? Are you hitting a case where you need them but they are wrong?? Ben -----Original Message----- From: libmesh-devel-bounces@... [mailto:libmesh-devel-bounces@...] On Behalf Of John Peterson Sent: Thursday, February 15, 2007 10:16 AM To: libmesh-devel@... Subject: [Libmesh-devel] Mesh::n_elem() and Mesh::n_nodes() Hi, These functions are O(1) "fast" (they just return the sizes of the underlying std::vectors of Nodes and Elements) but they are not always 100% correct. For example, this is not correct during coarsening when elements are being deleted and their vector entries are being replaced by NULLs. It's also not correct if I build a mesh and randomly delete an element from it, which is allowed by the current interface as well. An alternative is to replace these implementations with something similar to the n_active_elem() functions (defined in mesh_base.C) which use the Mesh iterators but are O(N) complexity. Another possible try would be to keep a "running total" of the number of nodes/elements that would be incremented/decremented by the Mesh::add_elem(), Mesh::delete_elem(), Mesh::add_point(), and Mesh::delete_node() functions. However, I see this as potentially error-prone as well. Thoughts? John 
From: John Peterson <peterson@cf...> - 2007-02-15 16:16:14

Hi, These functions are O(1) "fast" (they just return the sizes of the underlying std::vectors of Nodes and Elements) but they are not always 100% correct. For example, this is not correct during coarsening when elements are being deleted and their vector entries are being replaced by NULLs. It's also not correct if I build a mesh and randomly delete an element from it, which is allowed by the current interface as well. An alternative is to replace these implementations with something similar to the n_active_elem() functions (defined in mesh_base.C) which use the Mesh iterators but are O(N) complexity. Another possible try would be to keep a "running total" of the number of nodes/elements that would be incremented/decremented by the Mesh::add_elem(), Mesh::delete_elem(), Mesh::add_point(), and Mesh::delete_node() functions. However, I see this as potentially error-prone as well. Thoughts? John 
From: John Peterson <peterson@cf...> - 2007-02-13 16:09:31

Martin Lüthi writes: > Hi John > > John Peterson writes: > > Thanks for reporting that bug. Please try updating and running > > your test code again -- hopefully it is fixed. > > Thanks a lot for your very fast response. Unfortunately the example > code still breaks in exactly the same way. Doh! For the original problem, instead of a segfault I hit an assert in debug mode. So it may be that, although there is still a segfault, it is now for some other reason. > Just some guesses: > > o All nodes are copied in the beginning, but they are still references > to the initial mesh. Upon cleanup they get destroyed and so the > original mesh has many corrupted references. They aren't references to the original mesh... they are their own separate memory-allocated-for entities. One should be able to completely delete the original mesh and still have everything function perfectly (in theory). > o Is the BoundaryMesh renumbered after creation, say, when writing to > a file? The BoundaryMesh would potentially be renumbered if prepare_for_use is ever called. > I'll try to find out where the problem happens, as soon as I am able > to build the library in debug mode (currently I'm having problems with > tetmesh's tetrahedralize not being defined). Yeah, I'll need a simple, reproducible test case which demonstrates the problem to be able to continue bug hunting. Do you mean tetgen? I think it is working OK in debug mode for me anyway. J 
From: Martin Lüthi <luethi@va...> - 2007-02-13 15:53:07

Hi John John Peterson writes: > Thanks for reporting that bug. Please try updating and running > your test code again -- hopefully it is fixed. Thanks a lot for your very fast response. Unfortunately the example code still breaks in exactly the same way. Just some guesses: o All nodes are copied in the beginning, but they are still references to the initial mesh. Upon cleanup they get destroyed and so the original mesh has many corrupted references. o Is the BoundaryMesh renumbered after creation, say, when writing to a file? I'll try to find out where the problem happens, as soon as I am able to build the library in debug mode (currently I'm having problems with tetmesh's tetrahedralize not being defined). Thanks again Martin 