From: John Peterson <peterson@cf...> - 2007-07-26 02:25:51
|
deal.II is threaded, and Ben, you wrote the original PETSc support for deal.II, right? Was there not a question of threading, or did they just use PETSc with a single thread somehow? -John Benjamin Kirk writes: > It raises an interesting question, though. BOOST has a nice threads > interface which I look at maybe once every 6 months. The PETSc question is > the crux of the issue. > > Threading will essentially give us one process with a bunch of sub-processes > at the loop level. This will allow n-processors to share 1 mesh, for > example. But at the end of the day, we will build a PETSc matrix which is > split across MPI communicators. Somehow there needs to be a way to get an > MPI process associated with each thread. > > There has got to be a "standard" way to do that, but I am not sure how. > Also, the following does not help our case: > http://www-unix.mcs.anl.gov/petsc/petsc-2/miscellaneous/petscthreads.html > > > -Ben > > > > > > On 7/25/07 9:56 AM, "Roy Stogner" <roystgnr@...> wrote: > > > On Wed, 25 Jul 2007, Roy Stogner wrote: > > > >> I don't see how it would work with PETSc - does anybody know if > >> there's a way to turn on that sort of heterogenous parallelism in > >> their code? > > > > And even if it would work with PETSc, it looks like it wouldn't work > > with CPUs other than relatively recent x86 and amd64 compatible > > offerings. Sorry, everyone, false alarm. :-( > > --- > > Roy > > > > ------------------------------------------------------------------------- > > This SF.net email is sponsored by: Splunk Inc. > > Still grepping through log files to find problems? Stop. > > Now Search log events and configuration files using AJAX and a browser. > > Download your FREE copy of Splunk now >> http://get.splunk.com/ > > _______________________________________________ > > Libmesh-devel mailing list > > Libmesh-devel@... > > https://lists.sourceforge.net/lists/listinfo/libmesh-devel |
From: Roy Stogner <roystgnr@ic...> - 2007-07-25 14:57:54
|
On Wed, 25 Jul 2007, Roy Stogner wrote: > I don't see how it would work with PETSc - does anybody know if > there's a way to turn on that sort of heterogeneous parallelism in > their code? And even if it would work with PETSc, it looks like it wouldn't work with CPUs other than relatively recent x86 and amd64 compatible offerings. Sorry, everyone, false alarm. :-( --- Roy |
From: Roy Stogner <roystgnr@ic...> - 2007-07-25 14:52:18
|
Intel has a new API out that encapsulates threaded C++ code in nice constructs like parallel_for. It would be fantastic if we could use this for libMesh, using MPI for inter-node parallelism and threading for parallelism on shared-memory nodes (and incidentally reducing the number of Mesh copies on nodes I use from 4 to 1). That would probably even be worth the hassle of making sure to use a thread-safe MPI implementation. Unfortunately, I don't see how it would work with PETSc - does anybody know if there's a way to turn on that sort of heterogeneous parallelism in their code? --- Roy |
From: John Peterson <peterson@cf...> - 2007-07-13 16:09:21
|
Derek Gaston writes: > > I also like the idea of a ParallelMesh class that derives from Mesh. > I don't however like the look of either of: > > > if (EquationSystems::MeshType == ParallelMesh) > > ... > > > > Or else a partial specialization for > > EquationSystems<ParallelMesh>::reinit(); > > I would like to hope that we could design the ParallelMesh class so > that EquationSystems would _not_ have to know what kind of mesh is > being used. I believe this should be a major design goal. It still > might mean some changes to EquationSystems, but at the end of the day > EquationSystems should just take a Mesh object, and not care about > anything else. Agree. If we start templating on Mesh type I think pretty soon the whole library would become templated on Mesh type. This is exactly what happens if you start templating on space dimension, as I have heard some FE libraries do ;) Before it got to that, I would rather have a polymorphic solution (e.g. for EquationSystems) ESBase -----------^ ^--------- ES ParallelES (In case my ascii art doesn't come across, I just mean an abstract base class of EquationSystems where the implementation determines what happens in the parallel case) But the best solution is the one Derek mentioned: the Mesh interface is such that code using it doesn't know if the Mesh is parallel or not. It's definitely possible that we won't be able to do this in all instances, however. Consider the MeshTools for instance: build_square(MeshBase&, ...) would *ideally* not need to be changed *at all* for parallel Mesh generation, but somehow I don't see that happening. The add_elem() interface would have to somehow "ignore/not add" the elements that would eventually belong to other CPUs. > > BTW - I'm _REALLY_ glad to see this discussion being had... for > reasons I can't divulge at the moment..... Well, you could, but then you would have to kill all of us D: -Joh |
From: Roy Stogner <roystgnr@ic...> - 2007-07-13 16:03:08
|
On Fri, 13 Jul 2007, Derek Gaston wrote: > That said, I do like the idea of processor 0 "dishing out" processor > sized chunks at a time. But, I'm not sure how it would decide what a > processor sized chunk is... A "processor sized chunk" is just the total number of elements divided by the number of processors (rounding up). > or how it would figure out what a good pattern for dishing out the > chunks would be. Probably nothing more complicated than "the first elements we see in the file go to processor n, the next go to n-1, etc." > This is the job of metis/parmetis... so either we need to use Metis > offline _first_ to get an original splitting (like Sierra), or we're > going to have to dish out all of the chunks and then call Parmetis > to do a better partitioning... and then do a lot of crazy > communication to get the mesh situated correctly. Both ways have > their drawbacks. I vote for "crazy communication". In the long run we want to handle adaptivity efficiently on parallel meshes, and that's going to require repartitioning on the fly anyway. In the short run, I just prefer writing complicated library code once over running through a complicated workthrough for every application. > BTW: Parmetis is exactly what it says it is... _Par_metis. It is > supposed to work in parallel on parallel meshes... Well, libMesh is supposed to work in parallel too, and that's mostly true, but clearly there's parallel and then there's *parallel*. ;-) Good to hear confirmation, though. --- Roy |
From: Derek Gaston <friedmud@gm...> - 2007-07-13 15:49:24
|
Let me give a bit of insight from how we do mesh reading/writing in parallel in Sierra.... What we do, is partition the mesh _offline_ first. Then, when we start the program, each processor reads only it's mesh file. This is advantageous because you can find some large memory machine to do the partitioning on... and then run on a cluster where each node has a small amount of memory. As for writing... it's the same. Each node writes out it's portion of the solution... then, after the simulation is finished you can bring those solutions over to a large memory machine and "concat" them all together. Again, this means that the machine you solve on never has to have enough memory to hold the entire mesh. Now, of course, this definitely has some drawbacks... all the pre-post processing steps are kind of cumbersome (they can be done automatically if you ask for it, but there is no way to do it automatically if your mesh won't fit on the machine), but it does give some food for thought. That said, I do like the idea of processor 0 "dishing out" processor sized chunks at a time. But, I'm not sure how it would decide what a processor sized chunk is... or how it would figure out what a good pattern for dishing out the chunks would be. This is the job of metis/parmetis... so either we need to use Metis offline _first_ to get an original splitting (like Sierra), or we're going to have to dish out all of the chunks and then call Parmetis to do a better partitioning... and then do a lot of crazy communication to get the mesh situated correctly. Both ways have their drawbacks. BTW: Parmetis is exactly what it says it is... _Par_metis. It is supposed to work in parallel on parallel meshes... I also like the idea of a ParallelMesh class that derives from Mesh. I don't however like the look of either of: > if (EquationSystems::MeshType == ParallelMesh) > ... 
> > Or else a partial specialization for > EquationSystems<ParallelMesh>::reinit(); I would like to hope that we could design the ParallelMesh class so that EquationSystems would _not_ have to know what kind of mesh is being used. I believe this should be a major design goal. It still might mean some changes to EquationSystems, but at the end of the day EquationSystems should just take a Mesh object, and not care about anything else. BTW - I'm _REALLY_ glad to see this discussion being had... for reasons I can't divulge at the moment..... Derek On 7/13/07, John Peterson <peterson@...> wrote: > Roy Stogner writes: > > On Thu, 12 Jul 2007, John Peterson wrote: > > > > > I think this definitely takes us in the right direction, though I don't > > > doubt there will be several gotchas even with this first step. We should > > > also think about meshes which can't fit on a single processor... > > > > > > Such a mesh would be ungainly (how would we store it, multiple > > > data files?) but you can certainly build_square() something which > > > is too large for a single CPU, and it would be cool if that eventually > > > worked in parallel w/o ever having to store the whole thing on 1 CPU. > > > > Absolutely. > > Oh, and I forgot to mention, even if your mesh *is* small enough to fit > on 1 CPU (i.e. you can successfully open the mesh file and read it all in) > it may be too big for 16 copies to fit, supposing you are running on > some kind of quad-quad box. I don't think fewer cores are coming back > in style any time soon. > > > I think Ben's got the right idea about storage (even though it makes > > Mesh::read() tricky, since we'd have to make multiple passes to avoid > > reading in too many elements/nodes at once). What worries me about > > meshes too big to fit on one node is repartitioning. Will Parmetis > > work that way? 
The space-filling-curve repartitioners might be ideal > > for use in parallel; putting all the elements in some 1D order is easy > > to do in parallel, and then it's easy to negotiate which processors > > get which elements. > > I believe parmetis is designed to handle partitioning in parallel (based > on the name only, I haven't tried it). So it should work if we read in > a chunk of elements, send that out to a processor, and repeat until we've > read them all in. Then a "real" partitioner could partition them correctly, > incurring additional communication overhead of course. > > We also implemented a parallel sort algorithm some years back (I have this > code somewhere, I'm sure) which would probably allow the space-filling curve > partitioning you suggest. > > -John > > ------------------------------------------------------------------------- > This SF.net email is sponsored by DB2 Express > Download DB2 Express C - the FREE version of DB2 express and take > control of your XML. No limits. Just data. Click to get it now. > http://sourceforge.net/powerbar/db2/ > _______________________________________________ > Libmesh-devel mailing list > Libmesh-devel@... > https://lists.sourceforge.net/lists/listinfo/libmesh-devel > |
From: John Peterson <peterson@cf...> - 2007-07-13 14:37:55
|
Roy Stogner writes: > On Thu, 12 Jul 2007, John Peterson wrote: > > > I think this definitely takes us in the right direction, though I don't > > doubt there will be several gotchas even with this first step. We should > > also think about meshes which can't fit on a single processor... > > > > Such a mesh would be ungainly (how would we store it, multiple > > data files?) but you can certainly build_square() something which > > is too large for a single CPU, and it would be cool if that eventually > > worked in parallel w/o ever having to store the whole thing on 1 CPU. > > Absolutely. Oh, and I forgot to mention, even if your mesh *is* small enough to fit on 1 CPU (i.e. you can successfully open the mesh file and read it all in) it may be too big for 16 copies to fit, supposing you are running on some kind of quad-quad box. I don't think fewer cores are coming back in style any time soon. > I think Ben's got the right idea about storage (even though it makes > Mesh::read() tricky, since we'd have to make multiple passes to avoid > reading in too many elements/nodes at once). What worries me about > meshes too big to fit on one node is repartitioning. Will Parmetis > work that way? The space-filling-curve repartitioners might be ideal > for use in parallel; putting all the elements in some 1D order is easy > to do in parallel, and then it's easy to negotiate which processors > get which elements. I believe parmetis is designed to handle partitioning in parallel (based on the name only, I haven't tried it). So it should work if we read in a chunk of elements, send that out to a processor, and repeat until we've read them all in. Then a "real" partitioner could partition them correctly, incurring additional communication overhead of course. We also implemented a parallel sort algorithm some years back (I have this code somewhere, I'm sure) which would probably allow the space-filling curve partitioning you suggest. -John |
From: Roy Stogner <roystgnr@ic...> - 2007-07-13 13:52:53
|
On Thu, 12 Jul 2007, John Peterson wrote: > I think this definitely takes us in the right direction, though I don't > doubt there will be several gotchas even with this first step. We should > also think about meshes which can't fit on a single processor... > > Such a mesh would be ungainly (how would we store it, multiple > data files?) but you can certainly build_square() something which > is too large for a single CPU, and it would be cool if that eventually > worked in parallel w/o ever having to store the whole thing on 1 CPU. Absolutely. I think Ben's got the right idea about storage (even though it makes Mesh::read() tricky, since we'd have to make multiple passes to avoid reading in too many elements/nodes at once). What worries me about meshes too big to fit on one node is repartitioning. Will Parmetis work that way? The space-filling-curve repartitioners might be ideal for use in parallel; putting all the elements in some 1D order is easy to do in parallel, and then it's easy to negotiate which processors get which elements. --- Roy |
From: Roy Stogner <roystgnr@ic...> - 2007-07-13 13:48:30
|
On Thu, 12 Jul 2007, Benjamin Kirk wrote: > First off, let me say I am really, really glad to see this discssion. ;-) Yeah, it's been running around my head a couple times, but it occured to me that even a disjointed post to libmesh-devel would probably be more productive than an internal monologue. >>> After EquationSystems::init() is called, all of the Systems' DofMap >>> objects have had their chance to walk all over the Mesh, and then in >>> most cases every processor should stick to its local and ghost >>> elements until it's time to do adaptive refinement - am I right about >>> that? If so, then codes which aren't using AMR/C (and aren't using >>> MeshFunction, etc) could call Mesh::parallelize() at this time, and >>> all "parallelize()" would have to do would be to delete a lot of Elem >>> and Node objects. > > The only immediate gotcha I see is the Mesh::write() methods. In GMV, for > example, it is assumed that processor 0 holds the entire mesh and thus can > write the nodal positions and element connectivity every time GMVIO::write() > is called. This is a gotcha in the sense that my existing application code will break, I'll grant you that. ;-) But there is a short term workaround (codes which don't do any refinement don't need to write out the mesh more than once, which they can do before parallelize()) for this problem. And basically the goal of starting with just Mesh::parallelize() would be to root out such problems with testing rather than leaving us to speculate about them theoretically. > Note that this could be handled in kinda the reverse process of broadcasting > the mesh. A gather operation could be put into MeshCommunication. This > could be a minimal operation that gathers the local elements and nodes for > each processor one at a time. That way processor 0 can still write the > entire buffer, but it can do it in subdomain size chunks and avoid having to > store the entire list of nodes or connectivity. Sounds reasonable. 
> In the spirit of object-oriented-ness, though, I don't see this as a strong > argument against a ParallelMesh class. Rather, we probably should move in > this direction. It is just at the moment there will be little difference > between Mesh and ParallelMesh. > > I would think we should create ParallelMesh which is derived from Mesh. > They might as well be identical at the moment, thus making ParallelMesh > easy: > > class ParallelMesh : public Mesh > {}; Alright, I'm convinced; we can start from there. I'll add a "parallel_mesh.h" and "parallel_mesh.C" file to CVS soon, even if it may be a month or two before I can work much on this in earnest. > We can then implement the other changes you suggest very easily with > operator overloading and/or (boo, hiss) templates. For example, > EquationSystems could (should?) be templated on mesh type, and we might wind > up with some code like > > if (EquationSystems::MeshType == ParallelMesh) > ... > > Or else a partial specialization for > EquationSystems<ParallelMesh>::reinit(); > > Thoughts? You and your templates. I swear the masses of switch statements in FE<> are at least as inefficient as virtual function calls, and they're definitely twice as ugly. ;-) Let's leave Mesh and ParallelMesh as simple derived classes for now? Even if they're separate classes most of the code will be shared, and I suspect there won't be any virtual functions called frequently enough that we'd want to try to optimize them away with templates. --- Roy |
From: John Peterson <peterson@cf...> - 2007-07-12 22:59:58
|
Roy Stogner writes: > > Just to throw out some ideas: > > After EquationSystems::init() is called, all of the Systems' DofMap > objects have had their chance to walk all over the Mesh, and then in > most cases every processor should stick to its local and ghost > elements until it's time to do adaptive refinement - am I right about > that? If so, then codes which aren't using AMR/C (and aren't using > MeshFunction, etc) could call Mesh::parallelize() at this time, and > all "parallelize()" would have to do would be to delete a lot of Elem > and Node objects. > > Even for codes which are using adaptive refinement, would it be a good > stopgap solution to temporarily serialize the Mesh during > EquationSystems::reinit() calls? The sequence would go something > like: > > 1. Delete System matrices (which are invalid anyway since we're > changing the mesh), making room for a serial Mesh > 2. Serialize the Mesh (which might not be much harder than the code in > Mesh::read; there's just lots of tricks like remembering to copy > old_dof_objects) > 3. Repartition the mesh, call System::project_vector, etc. > 4. Parallelize the Mesh > > Of course, this won't work as is (the MeshRefinement flagging schemes > assume a serial Mesh, for instance), but it might be a good place to > start without creating a huge new ParallelMesh class or breaking > existing code. > > > If anyone has a better name suggestion for Mesh::parallelize() > (perhaps Mesh::raze()?) or any thoughts to add, let me know. I think this definitely takes us in the right direction, though I don't doubt there will be several gotchas even with this first step. We should also think about meshes which can't fit on a single processor... Such a mesh would be ungainly (how would we store it, multiple data files?) but you can certainly build_square() something which is too large for a single CPU, and it would be cool if that eventually worked in parallel w/o ever having to store the whole thing on 1 CPU. -John |
From: Roy Stogner <roystgnr@ic...> - 2007-07-12 22:52:55
|
Just to throw out some ideas: After EquationSystems::init() is called, all of the Systems' DofMap objects have had their chance to walk all over the Mesh, and then in most cases every processor should stick to its local and ghost elements until it's time to do adaptive refinement - am I right about that? If so, then codes which aren't using AMR/C (and aren't using MeshFunction, etc) could call Mesh::parallelize() at this time, and all "parallelize()" would have to do would be to delete a lot of Elem and Node objects. Even for codes which are using adaptive refinement, would it be a good stopgap solution to temporarily serialize the Mesh during EquationSystems::reinit() calls? The sequence would go something like: 1. Delete System matrices (which are invalid anyway since we're changing the mesh), making room for a serial Mesh 2. Serialize the Mesh (which might not be much harder than the code in Mesh::read; there's just lots of tricks like remembering to copy old_dof_objects) 3. Repartition the mesh, call System::project_vector, etc. 4. Parallelize the Mesh Of course, this won't work as is (the MeshRefinement flagging schemes assume a serial Mesh, for instance), but it might be a good place to start without creating a huge new ParallelMesh class or breaking existing code. If anyone has a better name suggestion for Mesh::parallelize() (perhaps Mesh::raze()?) or any thoughts to add, let me know. --- Roy |
From: Derek Gaston <friedmud@gm...> - 2007-07-09 22:44:02
|
Sigh... that's what I was afraid of. Damn... what a huge pain. For some reason I can't get the same large solutions to solve on my big Solaris machine here... they all segfault at some point. Damn, Damn, Damn, Damn, Damn. Thanks for the quick response... Derek On 7/9/07, John Peterson <peterson@...> wrote: > Derek Gaston writes: > > John, > > > > Sorry to dredge up an old email... but I'm wondering if you have any > > idea how much error there is in this process? Are the full double > > precision numbers being output to the GMV file and read back in? > > AFAIK all our GMV files are written single precision. So, you only > read in GMVs if it's your last resort ;-( I was able to get semi-good > postprocessed flux and volume integrals (say within 5-10% of true > value) but you can't use it for anything like convergence plots, > unfortunately. > > > I'm trying to re-use some runs I made last year... I have xda's for > > the mesh, and xdr's for the solutions... but for whatever reason those > > xdr's are corrupt (I remember that they were corrupt when they first > > came out too...). I also have gmv's and plt's of the same junk. I > > would _love_ to re-use these solutions as they are on millions of > > elements (took a long time to get, even on lonestar). But I'm going > > to be using them to drive error and indicator calculations so I need > > them to be precise. > > > > Thanks, > > Derek > > > > On 6/5/07, John Peterson <peterson@...> wrote: > > > Hi, > > > > > > It's possible to read in a GMV file and solutions. > > > Sample code: > > > > > > { > > > Mesh mesh(dim); > > > GMVIO gmvio(mesh); > > > gmvio.read("meshfile.gmv"); > > > } > > > > > > If there were solution fields in the GMV file they > > > were read in as well and are now stored in the GMV > > > object. 
Suppose you later created an EquationSystems > > > object based on this mesh: > > > > > > { > > > EquationSystems es(mesh); > > > > > > TransientLinearImplicitSystem & heat_system = > > > es.add_system<TransientLinearImplicitSystem> ("HeatSystem"); > > > > > > heat_system.add_variable ("T", FIRST); > > > > > > es.init(); > > > } > > > > > > Then, you can copy the solution you read in > > > from the GMV file to the EquationSystems object: > > > > > > { > > > gmvio.copy_nodal_solution(es); > > > } > > > > > > I did it in stages like this because you can't initialize the > > > EquationSystems without the mesh, and you can't easily put the > > > solution data into the EquationSystems object before it has been > > > initialized. > > > > > > Note that there isn't much information stored in a GMV file: only > > > nodal (or cell, which I didn't support yet) data values, so this > > > feature is limited to reading in linear, Lagrange elements (sorry > > > Roy!). > > > > > > If you wrote out a quadratic solution to a GMV file, it was probably > > > written out as linear sub-elements, so this is what will be read back > > > in, not your nice original quadratic solution. It will also probably > > > completely break on non-conforming (AMR) GMV files, I haven't even > > > tested that yet. > > > > > > I mainly wrote this was to be able to post-process uniform GMV files > > > for which I forgot to also write out the xda files. I hope it will > > > be useful for that, and maybe other stuff. > > > > > > -John > > > > > > > > > ------------------------------------------------------------------------- > > > This SF.net email is sponsored by DB2 Express > > > Download DB2 Express C - the FREE version of DB2 express and take > > > control of your XML. No limits. Just data. Click to get it now. > > > http://sourceforge.net/powerbar/db2/ > > > _______________________________________________ > > > Libmesh-devel mailing list > > > Libmesh-devel@... 
> > > https://lists.sourceforge.net/lists/listinfo/libmesh-devel > > > > |
From: John Peterson <peterson@cf...> - 2007-07-09 22:42:32
|
Derek Gaston writes: > John, > > Sorry to dredge up an old email... but I'm wondering if you have any > idea how much error there is in this process? Are the full double > precision numbers being output to the GMV file and read back in? AFAIK all our GMV files are written single precision. So, you only read in GMVs if it's your last resort ;-( I was able to get semi-good postprocessed flux and volume integrals (say within 5-10% of true value) but you can't use it for anything like convergence plots, unfortunately. > I'm trying to re-use some runs I made last year... I have xda's for > the mesh, and xdr's for the solutions... but for whatever reason those > xdr's are corrupt (I remember that they were corrupt when they first > came out too...). I also have gmv's and plt's of the same junk. I > would _love_ to re-use these solutions as they are on millions of > elements (took a long time to get, even on lonestar). But I'm going > to be using them to drive error and indicator calculations so I need > them to be precise. > > Thanks, > Derek > > On 6/5/07, John Peterson <peterson@...> wrote: > > Hi, > > > > It's possible to read in a GMV file and solutions. > > Sample code: > > > > { > > Mesh mesh(dim); > > GMVIO gmvio(mesh); > > gmvio.read("meshfile.gmv"); > > } > > > > If there were solution fields in the GMV file they > > were read in as well and are now stored in the GMV > > object. 
Suppose you later created an EquationSystems > > object based on this mesh: > > > > { > > EquationSystems es(mesh); > > > > TransientLinearImplicitSystem & heat_system = > > es.add_system<TransientLinearImplicitSystem> ("HeatSystem"); > > > > heat_system.add_variable ("T", FIRST); > > > > es.init(); > > } > > > > Then, you can copy the solution you read in > > from the GMV file to the EquationSystems object: > > > > { > > gmvio.copy_nodal_solution(es); > > } > > > > I did it in stages like this because you can't initialize the > > EquationSystems without the mesh, and you can't easily put the > > solution data into the EquationSystems object before it has been > > initialized. > > > > Note that there isn't much information stored in a GMV file: only > > nodal (or cell, which I didn't support yet) data values, so this > > feature is limited to reading in linear, Lagrange elements (sorry > > Roy!). > > > > If you wrote out a quadratic solution to a GMV file, it was probably > > written out as linear sub-elements, so this is what will be read back > > in, not your nice original quadratic solution. It will also probably > > completely break on non-conforming (AMR) GMV files, I haven't even > > tested that yet. > > > > I mainly wrote this was to be able to post-process uniform GMV files > > for which I forgot to also write out the xda files. I hope it will > > be useful for that, and maybe other stuff. > > > > -John > > > > > > ------------------------------------------------------------------------- > > This SF.net email is sponsored by DB2 Express > > Download DB2 Express C - the FREE version of DB2 express and take > > control of your XML. No limits. Just data. Click to get it now. > > http://sourceforge.net/powerbar/db2/ > > _______________________________________________ > > Libmesh-devel mailing list > > Libmesh-devel@... > > https://lists.sourceforge.net/lists/listinfo/libmesh-devel > > |
From: Derek Gaston <friedmud@gm...> - 2007-07-09 22:38:49
|
John, Sorry to dredge up an old email... but I'm wondering if you have any idea how much error there is in this process? Are the full double precision numbers being output to the GMV file and read back in? I'm trying to re-use some runs I made last year... I have xda's for the mesh, and xdr's for the solutions... but for whatever reason those xdr's are corrupt (I remember that they were corrupt when they first came out too...). I also have gmv's and plt's of the same junk. I would _love_ to re-use these solutions as they are on millions of elements (took a long time to get, even on lonestar). But I'm going to be using them to drive error and indicator calculations so I need them to be precise. Thanks, Derek On 6/5/07, John Peterson <peterson@...> wrote: > Hi, > > It's possible to read in a GMV file and solutions. > Sample code: > > { > Mesh mesh(dim); > GMVIO gmvio(mesh); > gmvio.read("meshfile.gmv"); > } > > If there were solution fields in the GMV file they > were read in as well and are now stored in the GMV > object. Suppose you later created an EquationSystems > object based on this mesh: > > { > EquationSystems es(mesh); > > TransientLinearImplicitSystem & heat_system = > es.add_system<TransientLinearImplicitSystem> ("HeatSystem"); > > heat_system.add_variable ("T", FIRST); > > es.init(); > } > > Then, you can copy the solution you read in > from the GMV file to the EquationSystems object: > > { > gmvio.copy_nodal_solution(es); > } > > I did it in stages like this because you can't initialize the > EquationSystems without the mesh, and you can't easily put the > solution data into the EquationSystems object before it has been > initialized. > > Note that there isn't much information stored in a GMV file: only > nodal (or cell, which I didn't support yet) data values, so this > feature is limited to reading in linear, Lagrange elements (sorry > Roy!). 
> > If you wrote out a quadratic solution to a GMV file, it was probably > written out as linear sub-elements, so this is what will be read back > in, not your nice original quadratic solution. It will also probably > completely break on non-conforming (AMR) GMV files; I haven't even > tested that yet. > > I mainly wrote this to be able to post-process uniform GMV files > for which I forgot to also write out the xda files. I hope it will > be useful for that, and maybe other stuff. > > -John > > _______________________________________________ > Libmesh-devel mailing list > Libmesh-devel@... > https://lists.sourceforge.net/lists/listinfo/libmesh-devel > |
From: Ondrej Certik <ondrej@ce...> - 2007-07-08 23:18:00
|
> the Debian package is still in the NEW queue, but hopefully they will > soon look at it and upload it into the archive. In the meantime, I They probably heard me and just uploaded a few minutes ago. :) It will be in Debian unstable tomorrow or the day after tomorrow. And it will get to Ubuntu usually a day or two after that. Ondrej |
From: John Peterson <peterson@cf...> - 2007-07-08 21:18:08
|
Ondrej Certik writes: > > DistributedVector (whose existence I'd forgotten about - thanks, John) > > derives from PetscVector directly too, doesn't it? Would it be a > > DistributedVector derives from NumericVector, at least in 0.6.0-rc2. Yes, AFAIK it has always derived from the abstract NumericVector base. > > partial fix if NumericVector::build() created a DistributedVector by > > default instead of just exiting with an error? > > There are actually two places in the system that exit with the solver > error and I tried to return NULL, which obviously segfaulted. :) But > maybe returning the DistributedVector would do the job. I hope it works. This is also a good opportunity (for anyone out there who wants to) to actually *write* a LibMeshSparseMatrix/DistributedMatrix class and LibMeshLinearSolver. Serial is fine at first, we can work on parallel later ;) -John |
From: Ondrej Certik <ondrej@ce...> - 2007-07-08 21:03:05
|
> DistributedVector (whose existence I'd forgotten about - thanks, John) > derives from PetscVector directly too, doesn't it? Would it be a DistributedVector derives from NumericVector, at least in 0.6.0-rc2. > partial fix if NumericVector::build() created a DistributedVector by > default instead of just exiting with an error? There are actually two places in the system that exit with the solver error and I tried to return NULL, which obviously segfaulted. :) But maybe returning the DistributedVector would do the job. Ondrej |
From: Roy Stogner <roystgnr@ic...> - 2007-07-08 20:33:31
|
On Sun, 8 Jul 2007, Ondrej Certik wrote: >> DistributedVector (include/numerics/distributed_vector.h) should be a >> working built-in implementation of NumericVector >> that we can use for the "Dummy" solver package. > > The LaspackVector actually derives from the NumericVector directly. > But I guess any solution would be fine. DistributedVector (whose existence I'd forgotten about - thanks, John) derives from PetscVector directly too, doesn't it? Would it be a partial fix if NumericVector::build() created a DistributedVector by default instead of just exiting with an error? --- Roy |
From: Ondrej Certik <ondrej@ce...> - 2007-07-08 20:06:25
|
> > Even adding a system in the first place may be a problem, because the > > System class will try to create a NumericVector for its solution - > > it doesn't matter that you're never going to be solving with that > > vector, because the data structures for NumericVector subclasses > > depend on what solver you expect to hand them to later. I'm afraid > > the only fix may be to add a "Dummy" subclass like you suggested > > except that it's not a dummy linear solver you need, it's dummy > > NumericVector (and if you insist on creating ImplicitSystem objects, > > SparseMatrix) instantiations. Exactly, that's what I did in the Debian package. I just made the laspack vector and laspack matrix a dummy vector and a dummy matrix (only I used the word laspack instead of dummy). > DistributedVector (include/numerics/distributed_vector.h) should be a > working built-in implementation of NumericVector > that we can use for the "Dummy" solver package. The LaspackVector actually derives from the NumericVector directly. But I guess any solution would be fine. Ondrej |
From: John Peterson <peterson@cf...> - 2007-07-08 18:56:51
|
Roy Stogner writes: > On Sun, 8 Jul 2007, Ondrej Certik wrote: > > >> "./configure --disable-petsc --disable-laspack" compiles for me now, > >> although naturally most of the example codes fail at runtime when they > >> try to build numeric vectors of type "INVALID_SOLVER_PACKAGE". > > > > The fix which you are probably talking about is just adding a missing > > #include, so that the compiler doesn't stop on AutoPtr error. But the > > real problem is that even though I am not calling "solve" in libmesh, > > it still fails on INVALID_SOLVER_PACKAGE, because I need to use > > equation_systems, because I am then calling > > > > equation_systems.get_system<LinearImplicitSystem>("Poisson").get_dof_map() > > > > etc. If you know a way around it, I am interested (I posted the > > relevant code below). > > Even adding a system in the first place may be a problem, because the > System class will try to create a NumericVector for its solution - > it doesn't matter that you're never going to be solving with that > vector, because the data structures for NumericVector subclasses > depend on what solver you expect to hand them to later. I'm afraid > the only fix may be to add a "Dummy" subclass like you suggested > except that it's not a dummy linear solver you need, it's dummy > NumericVector (and if you insist on creating ImplicitSystem objects, > SparseMatrix) instantiations. DistributedVector (include/numerics/distributed_vector.h) should be a working built-in implementation of NumericVector that we can use for the "Dummy" solver package. -John |
From: Roy Stogner <roystgnr@ic...> - 2007-07-08 18:49:33
|
On Sun, 8 Jul 2007, Ondrej Certik wrote: >> "./configure --disable-petsc --disable-laspack" compiles for me now, >> although naturally most of the example codes fail at runtime when they >> try to build numeric vectors of type "INVALID_SOLVER_PACKAGE". > > The fix which you are probably talking about is just adding a missing > #include, so that the compiler doesn't stop on AutoPtr error. But the > real problem is that even though I am not calling "solve" in libmesh, > it still fails on INVALID_SOLVER_PACKAGE, because I need to use > equation_systems, because I am then calling > > equation_systems.get_system<LinearImplicitSystem>("Poisson").get_dof_map() > > etc. If you know a way around it, I am interested (I posted the > relevant code below). Even adding a system in the first place may be a problem, because the System class will try to create a NumericVector for its solution - it doesn't matter that you're never going to be solving with that vector, because the data structures for NumericVector subclasses depend on what solver you expect to hand them to later. I'm afraid the only fix may be to add a "Dummy" subclass like you suggested except that it's not a dummy linear solver you need, it's dummy NumericVector (and if you insist on creating ImplicitSystem objects, SparseMatrix) instantiations. --- Roy |
From: Ondrej Certik <ondrej@ce...> - 2007-07-08 18:37:34
|
> Does the CVS libMesh still need such a patch? I think it does, see below. > "./configure --disable-petsc --disable-laspack" compiles for me now, > although naturally most of the example codes fail at runtime when they > try to build numeric vectors of type "INVALID_SOLVER_PACKAGE". The fix which you are probably talking about is just adding a missing #include, so that the compiler doesn't stop on AutoPtr error. But the real problem is that even though I am not calling "solve" in libmesh, it still fails on INVALID_SOLVER_PACKAGE, because I need to use equation_systems, because I am then calling equation_systems.get_system<LinearImplicitSystem>("Poisson").get_dof_map() etc. If you know a way around it, I am interested (I posted the relevant code below). > This should be possible... I would probably set it up as > > ./configure --disable-laspack --disable-petsc > > It would also give useful error messages when someone tries to > actually *solve* after configuring petsc and laspack off (which just > happened recently). Unfortunately, it gives an error message even during the calculation of the element matrices. > My initial thought is to create a concrete "Dummy" LinearSolver > class and SolverPackage enum which throws errors for all the pure > virtual functions in the LinearSolver interface. My thought is to create a concrete Dummy LinearSolver that will do nothing. This is exactly what I am doing, only I used the laspack interface for the dummy solver. The other option is to fix libmesh so that it doesn't call the sparse things when it is not needed.
Ondrej code for the element matrices: ----------------------------------- void mesh(const std::string& fmesh, const std::string& fmatrices, const std::string& fboundaries, double* bvalues, int vsize, int* bidx, int isize, double* lambda, int lsize, Updater *up) { char *p[1]={"./lmesh"}; int argc=1; char **argv=p; libMesh::init(argc, argv); { Mesh mesh(3); mesh.read(fmesh); mesh.find_neighbors(); int linear=mesh.elem(0)->type()==TET4; EquationSystems equation_systems (mesh); equation_systems.add_system<LinearImplicitSystem> ("Poisson"); if (linear) equation_systems.get_system("Poisson").add_variable("u", FIRST); else equation_systems.get_system("Poisson").add_variable("u", SECOND); equation_systems.init(); const unsigned int dim = mesh.mesh_dimension(); LinearImplicitSystem& system= equation_systems.get_system<LinearImplicitSystem>("Poisson"); const DofMap& dof_map = system.get_dof_map(); FEType fe_type = dof_map.variable_type(0); AutoPtr<FEBase> fe (FEBase::build(dim, fe_type)); QGauss qrule (dim, FIFTH); fe->attach_quadrature_rule (&qrule); AutoPtr<FEBase> fe_face (FEBase::build(dim, fe_type)); QGauss qface(dim-1, FIFTH); fe_face->attach_quadrature_rule (&qface); const std::vector<Real>& JxW = fe->get_JxW(); const std::vector<std::vector<Real> >& phi = fe->get_phi(); const std::vector<std::vector<RealGradient> >& dphi = fe->get_dphi(); DenseMatrix<Number> Ke; DenseVector<Number> Fee; std::vector<unsigned int> dof_indices; BC bc(fboundaries.c_str(),bvalues,bidx,isize); matrices mymatrices(fmatrices.c_str()); mymatrices.setsize(mesh.n_nodes(),mesh.n_elem(), linear); unsigned int nodemap[mesh.n_nodes()]; for (unsigned int i=0;i<mesh.n_nodes();i++) nodemap[mesh.node(i).dof_number(0,0,0)]=i; MeshBase::const_element_iterator el = mesh.elements_begin(); const MeshBase::const_element_iterator end_el = mesh.elements_end(); if (up!=NULL) up->init(mesh.n_elem()-1); for ( ; el != end_el ; ++el) { const Elem* elem = *el; if (up!=NULL) up->update(elem->id()); 
dof_map.dof_indices (elem, dof_indices); //std::cout << dof_indices.size() << " " << // elem->type() << std::endl; fe->reinit (elem); Ke.resize (dof_indices.size(), dof_indices.size()); Fee.resize (dof_indices.size()); for (unsigned int qp=0; qp<qrule.n_points(); qp++) { Real lam=lambda[elem->id()]; for (unsigned int i=0; i<phi.size(); i++) for (unsigned int j=0; j<phi.size(); j++) Ke(i,j) += JxW[qp]*(dphi[i][qp]*dphi[j][qp])*lam; } { int b,s; double bval; if (bc.find2(elem->id()+1,&b,&s,&bval)) for (unsigned int side=0; side<elem->n_sides(); side++) if ((side+1==(unsigned int)s) ) { if (elem->neighbor(side) != NULL) error(); const std::vector<std::vector<Real> >& phi_face=fe_face->get_phi(); const std::vector<Real>& JxW_face = fe_face->get_JxW(); fe_face->reinit(elem, side); Real value=bval; for (unsigned int qp=0; qp<qface.n_points(); qp++) { const Real penalty = 1.e10; for (unsigned int i=0; i<phi_face.size(); i++) for (unsigned int j=0; j<phi_face.size(); j++) Ke(i,j) += JxW_face[qp]* penalty*phi_face[i][qp]*phi_face[j][qp]; for (unsigned int i=0; i<phi_face.size(); i++) Fee(i) += JxW_face[qp]*penalty*value*phi_face[i][qp]; } } } mymatrices.addtoA(Ke,dof_indices,nodemap); mymatrices.addtoF(Fee); } //for element } libMesh::close(); } |
From: John Peterson <peterson@cf...> - 2007-07-08 17:40:06
|
Roy Stogner writes: > On Sun, 8 Jul 2007, Ondrej Certik wrote: > > > So to also ask some question - how about adding a special option to > > configure of libmesh to configure and compile without petsc and > > laspack? So that I don't have to patch the sources with each libmesh > > release. > > Does the CVS libMesh still need such a patch? > "./configure --disable-petsc --disable-laspack" compiles for me now, > although naturally most of the example codes fail at runtime when they > try to build numeric vectors of type "INVALID_SOLVER_PACKAGE". Oh, I wasn't aware that you had fixed the compilation issues with no valid Solver packages defined. -J |
From: John Peterson <peterson@cf...> - 2007-07-08 16:56:26
|
Ondrej Certik writes: > Hi, > > the Debian package is still in the NEW queue, but hopefully they will > soon look at it and upload it into the archive. In the meantime, I > already created an updated version, which can be downloaded from: > > http://debian.wgdd.de/debian/ > > just put > > deb http://debian.wgdd.de/debian unstable main contrib non-free > > into your /etc/apt/sources.list. However, I need to use libmesh without > petsc, because I want to solve the matrices myself (using > python-petsc), and the problem is that MPI makes it impossible to > work with python-petsc and libmesh with petsc at the same time (it > complains about calling MPI functions after MPI_Finalize). I used to > solve this problem by configuring libmesh against laspack. But since > laspack is not free, I created these binary libmesh packages: > > libmesh0.6.0-dev > libmesh0.6.0-pure-dev > libmesh0.6.0 > libmesh0.6.0-pure > > The -dev packages contain header files, the others just the runtime .so > library. The "libmesh0.6.0-pure" packages contain libmesh configured > without petsc and laspack; the "libmesh0.6.0" packages are linked > against petsc. > > How I achieved that is that I patched the laspack C++ interface in > libmesh (I removed all calls to laspack), so that I configure libmesh > with laspack but actually there is no laspack in it, so that it can > go into Debian main. The details can be checked here: > > http://debian.wgdd.de/debian/dists/unstable/main/source/libs/ > > The file libmesh_0.6.0~rc2.dfsg.orig.tar.gz is just the libmesh rc2 > release (I haven't yet updated the package to work with the latest > libmesh release) with all non-free stuff deleted. The file > libmesh_0.6.0~rc2.dfsg-1oc3.diff.gz contains all the Debian-specific > stuff - the commands to configure && compile && install libmesh and my > patches to mock up laspack. 
> > So those people like me, who just need the nice finite element code > from libmesh, will install libmesh0.6.0-pure-dev; the other people > who need the full-blown libmesh linked against petsc will install the > libmesh0.6.0-dev package (instructions on how to compile the examples are in > README.Debian), but it's very easy: > > g++ -I/usr/include/libmesh -I/usr/include/mpi -I/usr/include/petsc -c > -o ex2.o ex2.C > g++ -o ex2 ex2.o -lmesh -lpetsc -lpetscdm -lpetscksp -lpetscmat > -lpetscsnes -lpetscts -lpetscvec > > Actually, technically the packages at debian.wgdd.de are linked > against petsc linked against LAM (there is a special package for that > in that repository); that's because python-petsc doesn't work > properly when linked against petsc with MPICH. This will be resolved > with the petsc maintainer in Debian, so you don't have to worry - > whether you install it from above or from Debian (once libmesh gets > into unstable), it will work out of the box. Only python-petsc > works fine when installed from above, but there are problems with the > one in Debian. > > So, to also ask a question - how about adding a special option to > libmesh's configure to configure and compile without petsc and > laspack? So that I don't have to patch the sources with each libmesh > release. This should be possible... I would probably set it up as ./configure --disable-laspack --disable-petsc It would also give useful error messages when someone tries to actually *solve* after configuring petsc and laspack off (which just happened recently). My initial thought is to create a concrete "Dummy" LinearSolver class and SolverPackage enum which throws errors for all the pure virtual functions in the LinearSolver interface. Another option would be to make LinearSolver a non-abstract base, but I think this option is less preferable from a "good C++ practices" point of view. -John |
From: Roy Stogner <roystgnr@ic...> - 2007-07-08 16:54:17
|
On Sun, 8 Jul 2007, Ondrej Certik wrote: > So to also ask some question - how about adding a special option to > configure of libmesh to configure and compile without petsc and > laspack? So that I don't have to patch the sources with each libmesh > release. Does the CVS libMesh still need such a patch? "./configure --disable-petsc --disable-laspack" compiles for me now, although naturally most of the example codes fail at runtime when they try to build numeric vectors of type "INVALID_SOLVER_PACKAGE". --- Roy |