You can subscribe to this list here.
| Year | Jan | Feb | Mar | Apr | May | Jun | Jul | Aug | Sep | Oct | Nov | Dec |
|---|---|---|---|---|---|---|---|---|---|---|---|---|
| 2003 | 4 | 1 | 9 | 2 | 7 | 1 | 1 | 4 | 12 | 8 | 3 | 4 |
| 2004 | 1 | 21 | 31 | 10 | 12 | 15 | 4 | 6 | 5 | 11 | 43 | 13 |
| 2005 | 25 | 12 | 49 | 19 | 104 | 60 | 10 | 42 | 15 | 12 | 6 | 4 |
| 2006 | 1 | 6 | 31 | 17 | 5 | 95 | 38 | 44 | 6 | 8 | 21 |  |
| 2007 | 5 | 46 | 9 | 23 | 17 | 51 | 41 | 4 | 28 | 71 | 193 | 20 |
| 2008 | 46 | 46 | 18 | 38 | 14 | 107 | 50 | 115 | 84 | 96 | 105 | 34 |
| 2009 | 89 | 93 | 119 | 73 | 39 | 51 | 27 | 8 | 91 | 90 | 77 | 67 |
| 2010 | 25 | 36 | 98 | 45 | 25 | 60 | 17 | 36 | 48 | 45 | 65 | 39 |
| 2011 | 26 | 48 | 151 | 108 | 61 | 108 | 27 | 50 | 43 | 43 | 27 | 37 |
| 2012 | 56 | 120 | 72 | 57 | 82 | 66 | 51 | 75 | 166 | 232 | 284 | 105 |
| 2013 | 168 | 151 | 30 | 145 | 26 | 53 | 76 | 33 | 23 | 72 | 125 | 38 |
| 2014 | 47 | 62 | 27 | 8 | 12 | 2 | 22 | 22 |  | 17 | 20 | 12 |
| 2015 | 25 | 2 | 16 | 13 | 21 | 5 | 1 | 8 | 9 | 30 | 8 |  |
| 2016 | 16 | 31 | 43 | 18 | 21 | 11 | 17 | 26 | 4 | 16 | 5 | 6 |
| 2017 | 1 | 2 | 5 | 4 | 1 | 11 | 5 |  | 3 | 1 | 7 |  |
| 2018 | 8 | 8 | 1 |  | 5 | 11 |  | 51 | 3 |  |  |  |
| 2019 | 2 |  | 3 | 7 | 2 |  | 6 |  |  | 4 |  |  |
| 2020 |  |  |  |  | 1 |  |  |  |  |  |  |  |
From: John P. <jwp...@gm...> - 2020-05-01 15:24:10
|
This email list has outlived its usefulness, as almost all significant developer discussion now takes place on issues and PRs at github.com/libmesh/libmesh. You can still subscribe to lib...@li... for the time being, but our plan is to transition that list over to Google Groups, possibly with a different name. More info will be posted on lib...@li... when it becomes available. -- John |
From: Paul T. B. <ptb...@gm...> - 2019-10-18 15:01:21
|
(Apologies if you receive multiple copies of this announcement.) My group at AMD will be presenting tutorials on the HIP programming language during SC19 in Denver, CO. HIP is AMD's GPU programming language, very similar to CUDA. Please see the link below for the official advertisement (sorry for the link; the ad is a ~1 MB PDF). https://drive.google.com/open?id=1to5zo7QtsfClI98FXV3h9bGiKHF8eksu If you have questions, feel free to contact me at pau...@am.... Otherwise, please use the point-of-contact in the ad if you'd like to reserve a spot at one of the tutorials. Hope to see you folks in Denver. Thanks, Paul |
From: Kaustubh K. <kkh...@sd...> - 2019-10-04 00:55:16
|
Great, thanks! Get Outlook for Android<https://aka.ms/ghei36> ________________________________ From: Alexander Lindsay <ale...@gm...> Sent: Thursday, October 3, 2019 5:27:08 PM To: Kaustubh Khedkar <kkh...@sd...> Cc: lib...@li... <lib...@li...> Subject: Re: [Libmesh-devel] Reading a .msh file from Gambit into libmesh Yes libMesh does read .msh format. On Oct 3, 2019, at 12:42 PM, Kaustubh Khedkar <kkh...@sd...<mailto:kkh...@sd...>> wrote: Hi, I want to input a .msh file generated from Gambit software into libmesh. Does libmesh read .msh input mesh files? If it does not then which format do I need to export from Gambit to input it to libmesh? _______________________________________________ Libmesh-devel mailing list Lib...@li...<mailto:Lib...@li...> https://lists.sourceforge.net/lists/listinfo/libmesh-devel |
From: Alexander L. <ale...@gm...> - 2019-10-04 00:27:17
|
Yes libMesh does read .msh format. > On Oct 3, 2019, at 12:42 PM, Kaustubh Khedkar <kkh...@sd...> wrote: > > Hi, > I want to input a .msh file generated from Gambit software into libmesh. Does libmesh read .msh input mesh files? If it does not then which format do I need to export from Gambit to input it to libmesh? > _______________________________________________ > Libmesh-devel mailing list > Lib...@li... > https://lists.sourceforge.net/lists/listinfo/libmesh-devel |
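For the archives, a minimal sketch of what that looks like in code — assuming a standard libMesh build; `Mesh::read` dispatches on the file extension, so a Gmsh-format `.msh` file is picked up automatically (whether Gambit's `.msh` flavor matches the Gmsh format is a separate question the thread doesn't settle):

```cpp
// Sketch only (not from the thread): reading a .msh file with libMesh.
// Requires a libMesh installation; error handling omitted for brevity.
#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"

int main (int argc, char ** argv)
{
  libMesh::LibMeshInit init (argc, argv);

  libMesh::Mesh mesh (init.comm());
  mesh.read ("input.msh");   // format selected from the file extension
  mesh.print_info ();        // sanity-check what was read

  return 0;
}
```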
From: Kaustubh K. <kkh...@sd...> - 2019-10-04 00:17:11
|
Hi, I want to input a .msh file generated from Gambit software into libmesh. Does libmesh read .msh input mesh files? If it does not then which format do I need to export from Gambit to input it to libmesh? |
From: Stogner, R. H <roy...@ic...> - 2019-07-16 15:42:10
|
On Tue, 16 Jul 2019, Alexander Lindsay wrote:

> Actually there is no difference in the libraries being linked to; that was an erroneous report. Both MOOSE and the libmesh example link to both libhdf5_hl and libhdf5. (I'm still curious why both libraries get linked when only -lhdf5 was specified; I clearly don't know enough about this.)

At least via libtool it's possible for libraries to specify their own dependencies, so that instead of having to specify "-lFoo -lEverything -lThat -lIt -lDepends -lOn" you can just say "-lFoo" and let the linker sort it out. I see hdf5_hl mentioned in contrib/ NetCDF and ExodusII build files; maybe they're doing something of that sort?

> I guess this only makes it more puzzling to me why for the same arguments to `exII::ex_create`, I get an error return code for MOOSE and not for the libmesh example.

Yeah, I have no ideas here.
--- Roy |
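Roy's libtool point can be made concrete. A libtool archive (`.la` file) records its transitive link requirements in a `dependency_libs` field, which libtool expands onto the link line for you. The excerpt below is a hypothetical `libhdf5_hl.la` — the paths and version names are illustrative, not taken from an actual install:

```
# libhdf5_hl.la - a libtool library file (hypothetical excerpt)
dlname='libhdf5_hl.100.dylib'
library_names='libhdf5_hl.100.dylib libhdf5_hl.dylib'

# Libraries this one depends upon -- libtool appends these to the link
# line, so asking for -lhdf5_hl silently pulls in -lhdf5 as well.
dependency_libs=' -L/usr/local/hdf5/lib /usr/local/hdf5/lib/libhdf5.la -lz -ldl -lm'
```

pkg-config `.pc` files (`Requires.private`, `Libs.private`) and CMake config packages provide the same transitive-dependency mechanism, which is one way a library nobody asked for can show up in `otool -L` output.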
From: Alexander L. <ale...@gm...> - 2019-07-16 03:52:49
|
Actually there is no difference in the libraries being linked to; that was an erroneous report. Both MOOSE and the libmesh example link to both libhdf5_hl and libhdf5. (I'm still curious why both libraries get linked when only -lhdf5 was specified; I clearly don't know enough about this.) I guess this only makes it more puzzling to me why for the same arguments to `exII::ex_create`, I get an error return code for MOOSE and not for the libmesh example. On Mon, Jul 15, 2019 at 8:33 PM Alexander Lindsay <ale...@gm...> wrote: > This is some interesting stuff. Ok here's stepping through the > system_of_equations example 3, which calls exodus file writing methods: > > (lldb) > Process 45495 stopped > * thread #1, queue = 'com.apple.main-thread', stop reason = step over > frame #0: 0x00000001008b039c > libmesh_dbg.0.dylib`libMesh::ExodusII_IO_Helper::create(this=0x000000010d01b800, > filename="out.e") at exodusII_io_helper.C:1207 > 1204 mode |= EX_NOCLASSIC; > 1205 #endif > 1206 > -> 1207 ex_id = exII::ex_create(filename.c_str(), mode, &comp_ws, > &io_ws); > 1208 > 1209 EX_CHECK_ERR(ex_id, "Error creating ExodusII mesh file."); > 1210 > Target 0: (example-dbg) stopped. > (lldb) p mode > (int) $1 = 584 > (lldb) p comp_ws > (int) $2 = 8 > (lldb) p io_ws > (int) $3 = 8 > (lldb) p filename.c_str() > (const std::__1::basic_string<char, std::__1::char_traits<char>, > std::__1::allocator<char> >::value_type *) $4 = 0x00007ffeefbfdc29 "out.e" > (lldb) n > Process 45495 stopped > * thread #1, queue = 'com.apple.main-thread', stop reason = step over > frame #0: 0x00000001008b03cb > libmesh_dbg.0.dylib`libMesh::ExodusII_IO_Helper::create(this=0x000000010d01b800, > filename="out.e") at exodusII_io_helper.C:1209 > 1206 > 1207 ex_id = exII::ex_create(filename.c_str(), mode, &comp_ws, > &io_ws); > 1208 > -> 1209 EX_CHECK_ERR(ex_id, "Error creating ExodusII mesh file."); > 1210 > 1211 if (verbose) > 1212 libMesh::out << "File created successfully." 
<< std::endl; > Target 0: (example-dbg) stopped. > (lldb) p ex_id > (int) $5 = 65536 > > And then here is MOOSE's simple_diffusion: > > (lldb) > Process 45477 stopped > * thread #1, queue = 'com.apple.main-thread', stop reason = step over > frame #0: 0x000000010758d39c > libmesh_dbg.0.dylib`libMesh::ExodusII_IO_Helper::create(this=0x0000000111831e00, > filename="simple_diffusion_out.e") at exodusII_io_helper.C:1207 > 1204 mode |= EX_NOCLASSIC; > 1205 #endif > 1206 > -> 1207 ex_id = exII::ex_create(filename.c_str(), mode, &comp_ws, > &io_ws); > 1208 > 1209 EX_CHECK_ERR(ex_id, "Error creating ExodusII mesh file."); > 1210 > Target 0: (moose_test-dbg) stopped. > (lldb) p mode > (int) $0 = 584 > (lldb) p comp_ws > (int) $1 = 8 > (lldb) p io_ws > (int) $2 = 8 > (lldb) p filename.c_str() > (const std::__1::basic_string<char, std::__1::char_traits<char>, > std::__1::allocator<char> >::value_type *) $3 = 0x00007ffeefbfdbb9 > "simple_diffusion_out.e" > (lldb) n > Exodus Library Error: [ex_create] > Error: file create failed for simple_diffusion_out.e in NETCDF4 and > CLOBBER mode. > This library probably does not support netcdf-4 files. > exerrval = 13 > Process 45477 stopped > * thread #1, queue = 'com.apple.main-thread', stop reason = step over > frame #0: 0x000000010758d3cb > libmesh_dbg.0.dylib`libMesh::ExodusII_IO_Helper::create(this=0x0000000111831e00, > filename="simple_diffusion_out.e") at exodusII_io_helper.C:1209 > 1206 > 1207 ex_id = exII::ex_create(filename.c_str(), mode, &comp_ws, > &io_ws); > 1208 > -> 1209 EX_CHECK_ERR(ex_id, "Error creating ExodusII mesh file."); > 1210 > 1211 if (verbose) > 1212 libMesh::out << "File created successfully." << std::endl; > Target 0: (moose_test-dbg) stopped. > (lldb) p ex_id > (int) $4 = -1 > > The arguments into exII::ex_create (except for the name of course) are the > exact same!! 
> > So if I look into the hdf5 libraries that the different executables are > dynamically linking to I get different libraries: > > $ otool -L moose_test-dbg | grep hdf5 > /Users/lindad/hdf5/build-1.10.5/../installed-1.10.5/lib/libhdf5_hl.100.dylib > (compatibility version 102.0.0, current version 102.2.0) > /Users/lindad/hdf5/build-1.10.5/../installed-1.10.5/lib/libhdf5.103.dylib > (compatibility version 105.0.0, current version 105.0.0) > > $ otool -L example-dbg | grep hdf5 > /Users/lindad/hdf5/build-1.10.5/../installed-1.10.5/lib/libhdf5_hl.100.dylib > (compatibility version 102.0.0, current version 102.2.0) > > The moose executable is somehow linking to multiple hdf5 libraries, one > the traditional library and the other the "high level" library. The libmesh > executable only links to the high level. This is despite the fact that they > are using the exact same linking flags: > > -Wl,-rpath,/Users/lindad/projects/moose/scripts/../libmesh/installed-hdf5-1.10.5/lib > -L/Users/lindad/projects/moose/scripts/../libmesh/installed-hdf5-1.10.5/lib > -lmesh_dbg -L/Users/lindad/hdf5/installed-1.10.5/lib -lhdf5 > -Wl,-rpath,/Users/lindad/hdf5/installed-1.10.5/lib > -L/Users/lindad/projects/moose/petsc/arch-moose/lib > -Wl,-rpath,/Users/lindad/projects/moose/petsc/arch-moose/lib > -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib > -Wl,-rpath,/opt/moose/mpich-3.3/clang-8.0.0/lib > -L/opt/moose/mpich-3.3/clang-8.0.0/lib -Wl,-rpath,/opt/moose/llvm-8.0.0/lib > -L/opt/moose/llvm-8.0.0/lib > -Wl,-rpath,/opt/moose/gcc-8.3.0/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0 > -L/opt/moose/gcc-8.3.0/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0 > -Wl,-rpath,/opt/moose/gcc-8.3.0/lib -L/opt/moose/gcc-8.3.0/lib > -Wl,-rpath,/opt/moose/llvm-8.0.0/lib/clang/8.0.0/lib/darwin > -L/opt/moose/llvm-8.0.0/lib/clang/8.0.0/lib/darwin -lpetsc -lHYPRE -lcmumps > -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist > -lflapack -lfblas -lparmetis -lmetis -lptesmumps -lptscotchparmetis > -lptscotch 
-lptscotcherr -lesmumps -lscotch -lscotcherr -lX11 -lmpifort > -lgfortran -lgomp -lquadmath -lc++ -lmpicxx -lmpi -lpmpi -lomp > -lclang_rt.osx -lm -lz -lstdc++ -ldl > > I simply don't understand why the hdf5_hl library would get picked up in > either case. Shouldn't I have to specify -lhdf5_hl in order for that to > happen? > > > On Sun, Jul 14, 2019 at 5:10 PM John Peterson <jwp...@gm...> > wrote: > >> Regarding HDF versions, I can’t remember exactly what we are using at the >> moment but I think it’s 1.8.x? I will check and get back to you. >> >> On Jul 14, 2019, at 5:00 PM, John Peterson <jwp...@gm...> wrote: >> >> There isn’t a lot of logic behind what gets updated when beyond “as >> necessary”. The most recent netcdf update fixed an issue with writing hdf5 >> files for us. I didn’t update exodusII at the time since it there was no >> specific issue that needed to be addressed, but it is quite old relative to >> what’s available on github, so it should probably be done at some point. >> >> >> On Jul 14, 2019, at 4:35 PM, Alexander Lindsay <ale...@gm...> >> wrote: >> >> So I didn't get any replies to this >> <https://sourceforge.net/p/libmesh/mailman/message/36709086/>, but now >> I'm getting more curious. I'm curious why we fairly consistently update our >> netcdf version, but seem to never update our exodusii contrib? Are we sure >> that everything is compatible? I ran `make check` on my personal build of >> netcdf 4.6.2 with hdf5-1.10.5 and all tests pass, so I'm wondering whether >> the incompatibility may be at a higher level? Is our older exodusii source >> using "incorrect" APIs in the newer netcdf versions when hdf5 is available? >> >> I can look this up, but what does netcdf do differently when hdf5 is or >> is not enabled in our configure? >> >> _______________________________________________ >> Libmesh-devel mailing list >> Lib...@li... >> https://lists.sourceforge.net/lists/listinfo/libmesh-devel >> >> |
From: Alexander L. <ale...@gm...> - 2019-07-16 03:33:31
|
This is some interesting stuff. Ok here's stepping through the system_of_equations example 3, which calls exodus file writing methods: (lldb) Process 45495 stopped * thread #1, queue = 'com.apple.main-thread', stop reason = step over frame #0: 0x00000001008b039c libmesh_dbg.0.dylib`libMesh::ExodusII_IO_Helper::create(this=0x000000010d01b800, filename="out.e") at exodusII_io_helper.C:1207 1204 mode |= EX_NOCLASSIC; 1205 #endif 1206 -> 1207 ex_id = exII::ex_create(filename.c_str(), mode, &comp_ws, &io_ws); 1208 1209 EX_CHECK_ERR(ex_id, "Error creating ExodusII mesh file."); 1210 Target 0: (example-dbg) stopped. (lldb) p mode (int) $1 = 584 (lldb) p comp_ws (int) $2 = 8 (lldb) p io_ws (int) $3 = 8 (lldb) p filename.c_str() (const std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::value_type *) $4 = 0x00007ffeefbfdc29 "out.e" (lldb) n Process 45495 stopped * thread #1, queue = 'com.apple.main-thread', stop reason = step over frame #0: 0x00000001008b03cb libmesh_dbg.0.dylib`libMesh::ExodusII_IO_Helper::create(this=0x000000010d01b800, filename="out.e") at exodusII_io_helper.C:1209 1206 1207 ex_id = exII::ex_create(filename.c_str(), mode, &comp_ws, &io_ws); 1208 -> 1209 EX_CHECK_ERR(ex_id, "Error creating ExodusII mesh file."); 1210 1211 if (verbose) 1212 libMesh::out << "File created successfully." << std::endl; Target 0: (example-dbg) stopped. (lldb) p ex_id (int) $5 = 65536 And then here is MOOSE's simple_diffusion: (lldb) Process 45477 stopped * thread #1, queue = 'com.apple.main-thread', stop reason = step over frame #0: 0x000000010758d39c libmesh_dbg.0.dylib`libMesh::ExodusII_IO_Helper::create(this=0x0000000111831e00, filename="simple_diffusion_out.e") at exodusII_io_helper.C:1207 1204 mode |= EX_NOCLASSIC; 1205 #endif 1206 -> 1207 ex_id = exII::ex_create(filename.c_str(), mode, &comp_ws, &io_ws); 1208 1209 EX_CHECK_ERR(ex_id, "Error creating ExodusII mesh file."); 1210 Target 0: (moose_test-dbg) stopped. 
(lldb) p mode (int) $0 = 584 (lldb) p comp_ws (int) $1 = 8 (lldb) p io_ws (int) $2 = 8 (lldb) p filename.c_str() (const std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> >::value_type *) $3 = 0x00007ffeefbfdbb9 "simple_diffusion_out.e" (lldb) n Exodus Library Error: [ex_create] Error: file create failed for simple_diffusion_out.e in NETCDF4 and CLOBBER mode. This library probably does not support netcdf-4 files. exerrval = 13 Process 45477 stopped * thread #1, queue = 'com.apple.main-thread', stop reason = step over frame #0: 0x000000010758d3cb libmesh_dbg.0.dylib`libMesh::ExodusII_IO_Helper::create(this=0x0000000111831e00, filename="simple_diffusion_out.e") at exodusII_io_helper.C:1209 1206 1207 ex_id = exII::ex_create(filename.c_str(), mode, &comp_ws, &io_ws); 1208 -> 1209 EX_CHECK_ERR(ex_id, "Error creating ExodusII mesh file."); 1210 1211 if (verbose) 1212 libMesh::out << "File created successfully." << std::endl; Target 0: (moose_test-dbg) stopped. (lldb) p ex_id (int) $4 = -1 The arguments into exII::ex_create (except for the name of course) are the exact same!! So if I look into the hdf5 libraries that the different executables are dynamically linking to I get different libraries: $ otool -L moose_test-dbg | grep hdf5 /Users/lindad/hdf5/build-1.10.5/../installed-1.10.5/lib/libhdf5_hl.100.dylib (compatibility version 102.0.0, current version 102.2.0) /Users/lindad/hdf5/build-1.10.5/../installed-1.10.5/lib/libhdf5.103.dylib (compatibility version 105.0.0, current version 105.0.0) $ otool -L example-dbg | grep hdf5 /Users/lindad/hdf5/build-1.10.5/../installed-1.10.5/lib/libhdf5_hl.100.dylib (compatibility version 102.0.0, current version 102.2.0) The moose executable is somehow linking to multiple hdf5 libraries, one the traditional library and the other the "high level" library. The libmesh executable only links to the high level. 
This is despite the fact that they are using the exact same linking flags: -Wl,-rpath,/Users/lindad/projects/moose/scripts/../libmesh/installed-hdf5-1.10.5/lib -L/Users/lindad/projects/moose/scripts/../libmesh/installed-hdf5-1.10.5/lib -lmesh_dbg -L/Users/lindad/hdf5/installed-1.10.5/lib -lhdf5 -Wl,-rpath,/Users/lindad/hdf5/installed-1.10.5/lib -L/Users/lindad/projects/moose/petsc/arch-moose/lib -Wl,-rpath,/Users/lindad/projects/moose/petsc/arch-moose/lib -Wl,-rpath,/opt/X11/lib -L/opt/X11/lib -Wl,-rpath,/opt/moose/mpich-3.3/clang-8.0.0/lib -L/opt/moose/mpich-3.3/clang-8.0.0/lib -Wl,-rpath,/opt/moose/llvm-8.0.0/lib -L/opt/moose/llvm-8.0.0/lib -Wl,-rpath,/opt/moose/gcc-8.3.0/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0 -L/opt/moose/gcc-8.3.0/lib/gcc/x86_64-apple-darwin18.5.0/8.3.0 -Wl,-rpath,/opt/moose/gcc-8.3.0/lib -L/opt/moose/gcc-8.3.0/lib -Wl,-rpath,/opt/moose/llvm-8.0.0/lib/clang/8.0.0/lib/darwin -L/opt/moose/llvm-8.0.0/lib/clang/8.0.0/lib/darwin -lpetsc -lHYPRE -lcmumps -ldmumps -lsmumps -lzmumps -lmumps_common -lpord -lscalapack -lsuperlu_dist -lflapack -lfblas -lparmetis -lmetis -lptesmumps -lptscotchparmetis -lptscotch -lptscotcherr -lesmumps -lscotch -lscotcherr -lX11 -lmpifort -lgfortran -lgomp -lquadmath -lc++ -lmpicxx -lmpi -lpmpi -lomp -lclang_rt.osx -lm -lz -lstdc++ -ldl I simply don't understand why the hdf5_hl library would get picked up in either case. Shouldn't I have to specify -lhdf5_hl in order for that to happen? On Sun, Jul 14, 2019 at 5:10 PM John Peterson <jwp...@gm...> wrote: > Regarding HDF versions, I can’t remember exactly what we are using at the > moment but I think it’s 1.8.x? I will check and get back to you. > > On Jul 14, 2019, at 5:00 PM, John Peterson <jwp...@gm...> wrote: > > There isn’t a lot of logic behind what gets updated when beyond “as > necessary”. The most recent netcdf update fixed an issue with writing hdf5 > files for us. 
I didn’t update exodusII at the time since it there was no > specific issue that needed to be addressed, but it is quite old relative to > what’s available on github, so it should probably be done at some point. > > > On Jul 14, 2019, at 4:35 PM, Alexander Lindsay <ale...@gm...> > wrote: > > So I didn't get any replies to this > <https://sourceforge.net/p/libmesh/mailman/message/36709086/>, but now > I'm getting more curious. I'm curious why we fairly consistently update our > netcdf version, but seem to never update our exodusii contrib? Are we sure > that everything is compatible? I ran `make check` on my personal build of > netcdf 4.6.2 with hdf5-1.10.5 and all tests pass, so I'm wondering whether > the incompatibility may be at a higher level? Is our older exodusii source > using "incorrect" APIs in the newer netcdf versions when hdf5 is available? > > I can look this up, but what does netcdf do differently when hdf5 is or is > not enabled in our configure? > > _______________________________________________ > Libmesh-devel mailing list > Lib...@li... > https://lists.sourceforge.net/lists/listinfo/libmesh-devel > > |
From: John P. <jwp...@gm...> - 2019-07-15 00:12:44
|
There isn’t a lot of logic behind what gets updated when beyond “as necessary”. The most recent netcdf update fixed an issue with writing hdf5 files for us. I didn’t update exodusII at the time since it there was no specific issue that needed to be addressed, but it is quite old relative to what’s available on github, so it should probably be done at some point. > On Jul 14, 2019, at 4:35 PM, Alexander Lindsay <ale...@gm...> wrote: > > So I didn't get any replies to this, but now I'm getting more curious. I'm curious why we fairly consistently update our netcdf version, but seem to never update our exodusii contrib? Are we sure that everything is compatible? I ran `make check` on my personal build of netcdf 4.6.2 with hdf5-1.10.5 and all tests pass, so I'm wondering whether the incompatibility may be at a higher level? Is our older exodusii source using "incorrect" APIs in the newer netcdf versions when hdf5 is available? > > I can look this up, but what does netcdf do differently when hdf5 is or is not enabled in our configure? > _______________________________________________ > Libmesh-devel mailing list > Lib...@li... > https://lists.sourceforge.net/lists/listinfo/libmesh-devel |
From: John P. <jwp...@gm...> - 2019-07-15 00:10:57
|
Regarding HDF versions, I can’t remember exactly what we are using at the moment but I think it’s 1.8.x? I will check and get back to you. > On Jul 14, 2019, at 5:00 PM, John Peterson <jwp...@gm...> wrote: > > There isn’t a lot of logic behind what gets updated when beyond “as necessary”. The most recent netcdf update fixed an issue with writing hdf5 files for us. I didn’t update exodusII at the time since it there was no specific issue that needed to be addressed, but it is quite old relative to what’s available on github, so it should probably be done at some point. > > >> On Jul 14, 2019, at 4:35 PM, Alexander Lindsay <ale...@gm...> wrote: >> >> So I didn't get any replies to this, but now I'm getting more curious. I'm curious why we fairly consistently update our netcdf version, but seem to never update our exodusii contrib? Are we sure that everything is compatible? I ran `make check` on my personal build of netcdf 4.6.2 with hdf5-1.10.5 and all tests pass, so I'm wondering whether the incompatibility may be at a higher level? Is our older exodusii source using "incorrect" APIs in the newer netcdf versions when hdf5 is available? >> >> I can look this up, but what does netcdf do differently when hdf5 is or is not enabled in our configure? >> _______________________________________________ >> Libmesh-devel mailing list >> Lib...@li... >> https://lists.sourceforge.net/lists/listinfo/libmesh-devel |
From: Alexander L. <ale...@gm...> - 2019-07-14 23:36:06
|
So I didn't get any replies to this <https://sourceforge.net/p/libmesh/mailman/message/36709086/>, but now I'm getting more curious. I'm curious why we fairly consistently update our netcdf version, but seem to never update our exodusii contrib? Are we sure that everything is compatible? I ran `make check` on my personal build of netcdf 4.6.2 with hdf5-1.10.5 and all tests pass, so I'm wondering whether the incompatibility may be at a higher level? Is our older exodusii source using "incorrect" APIs in the newer netcdf versions when hdf5 is available? I can look this up, but what does netcdf do differently when hdf5 is or is not enabled in our configure? |
From: Cody P. <cod...@gm...> - 2019-05-22 23:18:03
|
So we have a user that wants to try to start using the "recover" capability in MOOSE to resume a long running simulation on the clusters. This particular simulation is very large and the user is using nemesis output. I was suprised to see that we don't have any nemesis appending tests in all of MOOSE. It appears that all of our nemesis output is on "steady-state" solves, or the recover flag has been explicitly set to false on any transient simulations checked in. When I went to test this capability it did fail with a seg fault and pointed to this line of code: https://github.com/libMesh/libmesh/blob/master/src/mesh/nemesis_io_helper.C#L2485 I verified that the "exodus_node_num_to_libmesh" structure was empty. I can see that at least an attempt has been made to support the "append" functionality in Nemesis but does anyone know for sure that it works? It appears that we might be missing some initialization call or something? Full disclosure - We have additional wrappers around Exodus and Nemesis in MOOSE but they aren't doing anything that I would consider illegal or bad. Those wrappers are creating and using the main Exodus and Nemesis objects through the public API. Before I (or someone else) dives in, do we expect this case to work? Anyone else doing Nemesis appends? Thanks, Cody |
From: Alexander L. <ale...@gm...> - 2019-05-03 16:24:36
|
Can I change Point to inherit from VectorValue<Real>? To generalize FE and FEMap (see libmesh issue here), I'd like to replace a lot of the occurrences in the FEAbstract interface of Point with something like `PointType<RealType>::type` where I define PointType like: template <typename RealType> struct PointType { typedef VectorValue<RealType> type; }; template <> struct PointType<Real> { typedef Point type; }; I'd like the user who's exercising the generalized capability to have some consistency. E.g. they're used to passing in a std::vector of Points when optionally reinit'ing on custom points. But if they want points to be a std::vector of point like things that may include derivative information, they generally can't use raw TypeVectors (because we've restricted how they can construct them); they have to use VectorValues. So if they need to use VectorValues when calling reinit with their custom point type, then I think it would be good if Point itself is a VectorValue<Real>. Alex |
From: Derek G. <fri...@gm...> - 2019-04-26 17:38:34
|
I think we'll try this soon. The idea is that if you have 1000 variables that all have multiple components (say, higher-order monomial on elements). Then we would like all the components to be together so that we can sum each scalar shape function times 1000 components and have it vectorize. As it is now you get all the components for one variable, then the next, then the next.... so there isn't an efficient way to multiply all phi_0 by comp_0 then phi_1 by comp_1 for all variables. I'll open a ticket so we don't collectively forget :-) Thanks, Derek On Fri, Apr 26, 2019 at 8:09 AM Stogner, Roy H <roy...@ic...> wrote: > > On Thu, 25 Apr 2019, Derek Gaston wrote: > > > This is an email from 3 years ago... no one responded :-) > > Man, and I was just starting to feel proud of myself for starting to > catch up on *months*-old issues... > > > This is coming up again because we're looking at "array variables" > > again... and this would be a large optimization. > > Are you sure? IIRC one of Paul's students experimented years ago with > what I had *thought* would be the lowest-hanging fruit on > vectorization, switching the order of the shape-function and > quadrature-point indices in our FE element-local arrays, but then > reported only minimal speedup on assembly: single-digit percentages, > not double. > > > Any comments? > > Well, there's certainly no harm in trying. I'm done digging about in > DofObject for the #2095 changes, and those were actually surprisingly > orthogonal to the dof_number code anyway, so even if I suddenly hanker > to finish #1438 it might not step on any toes. > > We currently have a ton of code that assumes dof_number is sorted > first by owning processor_id, but other than that we're flexible (e.g. > variable vs node sorting) and we should be able to become more > flexible still without breaking anything. > > > I'm working on some low-level optimization stuff... 
and one of the > > things I want to do is more vectorization when computing the value > > of variables and when computing residuals, etc. I'm using the > > variable groups stuff to be able to do large vector operations. To > > that end... I think that the current choice for dof-ordering within > > variable groups could be changed to be more amenable to > > vectorization. > > > > Currently DofObject uses dof numbering based on this ordering for > variable groups: > > > > id = base + var_in_vg*ncomp + comp > > > > The problem with this is that I would like to do a vector operation that > is like this: > > > > phi_i * all_dofs_in_var_group_corresponding_to_i > > > > With any FE types that have more than one component the above ordering > means that the dofs corresponding to that shape function are spread > > out in memory (i.e. they're NOT contiguous) and that would preclude > vectorization of the above operation. > > So you're doing operations directly on the DoFs, not on evaluations at > quadrature points? > > > Instead, if we use a dof ordering like this: > > > > id = base + comp*n_var_in_vg + var_in_vg > > > > All of the dofs that need to multiply the same shape function would be > contiguous and easily vectorized. > > > > I don't think this change would effect anyone. We've never guaranteed > this ordering (and it's fairly new anyway)... I think everyone is > > probably using the API instead of thinking of raw memory access like > this (And I know I probably should be too... but I've been doing it > > that way for over 10 years and I have a few applications that have > hundreds to tens-of-thousands of variables now that could really use this > > optimization). > > The most obvious catch here is that dof_number is so far into inner > loops that my usual "make as much stuff runtime-selectable as > possible" demand is completely overridden by performance concerns; > this would have to be a configure-time option IMHO. 
> > If you want to do it yourself I don't see any objections; if you'd > like me to take first crack then start up an issue and assign me so I > don't forget about the idea again? > --- > Roy |
From: Stogner, R. H <roy...@ic...> - 2019-04-26 14:24:50
|
On Thu, 25 Apr 2019, Derek Gaston wrote:

> This is an email from 3 years ago... no one responded :-)

Man, and I was just starting to feel proud of myself for starting to catch up on *months*-old issues...

> This is coming up again because we're looking at "array variables" again... and this would be a large optimization.

Are you sure? IIRC one of Paul's students experimented years ago with what I had *thought* would be the lowest-hanging fruit on vectorization, switching the order of the shape-function and quadrature-point indices in our FE element-local arrays, but then reported only minimal speedup on assembly: single-digit percentages, not double.

> Any comments?

Well, there's certainly no harm in trying. I'm done digging about in DofObject for the #2095 changes, and those were actually surprisingly orthogonal to the dof_number code anyway, so even if I suddenly hanker to finish #1438 it might not step on any toes.

We currently have a ton of code that assumes dof_number is sorted first by owning processor_id, but other than that we're flexible (e.g. variable vs node sorting) and we should be able to become more flexible still without breaking anything.

> I'm working on some low-level optimization stuff... and one of the things I want to do is more vectorization when computing the value of variables and when computing residuals, etc. I'm using the variable groups stuff to be able to do large vector operations. To that end... I think that the current choice for dof-ordering within variable groups could be changed to be more amenable to vectorization.
>
> Currently DofObject uses dof numbering based on this ordering for variable groups:
>
> id = base + var_in_vg*ncomp + comp
>
> The problem with this is that I would like to do a vector operation that is like this:
>
> phi_i * all_dofs_in_var_group_corresponding_to_i
>
> With any FE types that have more than one component the above ordering means that the dofs corresponding to that shape function are spread out in memory (i.e. they're NOT contiguous) and that would preclude vectorization of the above operation.

So you're doing operations directly on the DoFs, not on evaluations at quadrature points?

> Instead, if we use a dof ordering like this:
>
> id = base + comp*n_var_in_vg + var_in_vg
>
> All of the dofs that need to multiply the same shape function would be contiguous and easily vectorized.
>
> I don't think this change would effect anyone. We've never guaranteed this ordering (and it's fairly new anyway)... I think everyone is probably using the API instead of thinking of raw memory access like this (And I know I probably should be too... but I've been doing it that way for over 10 years and I have a few applications that have hundreds to tens-of-thousands of variables now that could really use this optimization).

The most obvious catch here is that dof_number is so far into inner loops that my usual "make as much stuff runtime-selectable as possible" demand is completely overridden by performance concerns; this would have to be a configure-time option IMHO.

If you want to do it yourself I don't see any objections; if you'd like me to take first crack then start up an issue and assign me so I don't forget about the idea again?
--- Roy |
From: Derek G. <fri...@gm...> - 2019-04-25 19:32:05
|
This is an email from 3 years ago... no one responded :-)

This is coming up again because we're looking at "array variables"
again... and this would be a large optimization.

Any comments?

Derek

On Wed, Aug 3, 2016 at 9:38 AM Derek Gaston <fri...@gm...> wrote:

> I'm working on some low-level optimization stuff... and one of the
> things I want to do is more vectorization when computing the value of
> variables and when computing residuals, etc. I'm using the variable
> groups stuff to be able to do large vector operations.
>
> To that end... I think that the current choice for dof-ordering within
> variable groups could be changed to be more amenable to vectorization.
>
> Currently DofObject uses dof numbering based on this ordering for
> variable groups:
>
> id = base + var_in_vg*ncomp + comp
>
> The problem with this is that I would like to do a vector operation
> that is like this:
>
> phi_i * all_dofs_in_var_group_corresponding_to_i
>
> With any FE types that have more than one component the above ordering
> means that the dofs corresponding to that shape function are spread
> out in memory (i.e. they're NOT contiguous) and that would preclude
> vectorization of the above operation.
>
> Instead, if we use a dof ordering like this:
>
> id = base + comp*n_var_in_vg + var_in_vg
>
> All of the dofs that need to multiply the same shape function would be
> contiguous and easily vectorized.
>
> I don't think this change would affect anyone. We've never guaranteed
> this ordering (and it's fairly new anyway)... I think everyone is
> probably using the API instead of thinking of raw memory access like
> this (and I know I probably should be too... but I've been doing it
> that way for over 10 years and I have a few applications that have
> hundreds to tens-of-thousands of variables now that could really use
> this optimization).
>
> What say you?
>
> Derek |
From: John P. <jwp...@gm...> - 2019-04-25 13:33:19
|
On Thu, Apr 25, 2019 at 1:19 AM <mar...@ge...> wrote:

> Hi
>
> There's an error in checkpoint_in.C line 1095
>
> #ifdef LIBMESH_ENABLE_UNIQUE_ID
> Node * node =
> #endif
> mesh.add_point(p, id, pid);
>
> The #ifdef there fails if LIBMESH_ENABLE_UNIQUE_ID is not defined, as
> node is used a few lines later. Should this macro be just removed, or
> does it have some important consequences?

Thanks for pointing that out, looks like it was just recently changed
in 4ad3d936f9 so I'll fix it and add some --disable-unique-id testing
if we don't already have it.

--
John |
From: <mar...@ge...> - 2019-04-25 06:19:03
|
Hi

There's an error in checkpoint_in.C line 1095

#ifdef LIBMESH_ENABLE_UNIQUE_ID
Node * node =
#endif
mesh.add_point(p, id, pid);

The #ifdef there fails if LIBMESH_ENABLE_UNIQUE_ID is not defined, as
node is used a few lines later. Should this macro be just removed, or
does it have some important consequences?

Best, Martin

--
Dr. Martin Lüthi
Department of Geography, 3G
University of Zurich
Winterthurerstrasse 190
CH-8057 Zurich, Switzerland

Phone: +41-44-635-5146
Fax: +41-44-635-6841
Email: mar...@ge... |
From: Stogner, R. H <roy...@ic...> - 2019-04-09 16:02:00
|
On Tue, 9 Apr 2019, Paul T. Bauman wrote:

> Is there an easy way to take existing finite elements and combine
> them? I'm interested in trying out an augmented Taylor-Hood element
> (P2/P1+P0) where basically I'm just adding a constant to the P1
> pressure space. It would be nice to be able to do this without having
> to implement all the methods for a new finite element object, but
> wasn't sure if anyone had done this before or would know if it's
> possible.

This sounds like a great idea but there's definitely no way to do it
right now.

What would worry me the most about adding it is that we have too much
critical functionality implemented in "switch statements in
FEInterface" form, which might have efficiency advantages but is
practically impossible to extend at runtime. An FEComposite subclass
could handle virtual function responses on the fly, but there have got
to be a couple hundred FEInterface uses which would have to be
upgraded, probably also adding some cost to performance (for object
construction, not just for virtual function vs switch statement) in
the common case.
---
Roy |
From: Paul T. B. <ptb...@gm...> - 2019-04-09 15:26:43
|
Hi All,

Is there an easy way to take existing finite elements and combine
them? I'm interested in trying out an augmented Taylor-Hood element
(P2/P1+P0) where basically I'm just adding a constant to the P1
pressure space. It would be nice to be able to do this without having
to implement all the methods for a new finite element object, but
wasn't sure if anyone had done this before or would know if it's
possible.

Thanks in advance.

Best,
Paul |
From: Boris B. <bor...@bu...> - 2019-03-21 23:52:57
|
On 3/20/19 2:13 PM, Stogner, Roy H wrote:

> On Mon, 18 Mar 2019, Boris Boutkov wrote:
>
>> Out of some curiosity I recently rebased my GMG implementation on to
>> the upcoming NBX changes in PR #1965 to do some weak scaling
>> analysis.
>>
>> I ran an np 256 Poisson problem using GMG with ~10k dofs/proc, and
>> in short, it seems like the NBX changes provide some solid
>> improvement, bringing my total runtime from something like ~19s to
>> ~16s, so a nice step on the road to weak GMG scaling. Pre-NBX I had
>> a good chunk (30% total w/sub) of my time being spent in the
>> alltoall() which post-NBX is down to 2%! This came with a fairly
>> large amount of calls to possibly_receive() and now 15% of my total
>> time being spent in there, but the overall timing seems to be a win
>> so thanks much for this work!
>
> Thanks for the update!
>
> Greedy question: could you try the same timings at, say, np 16? I was
> pretty confident np 256 would be a big win, since the asymptotic
> scaling is improved, but it'd be nice to have data points at lower
> processor counts too.

Sure. I've updated to include the np16 results which can be found at:

https://drive.google.com/file/d/1X8U1XcZNNEAOK-z33jFFfKuM6zYRjsji/view?usp=sharing

The short of it is that the overall timing is nearly indistinguishable
at np16. Also similar to before, the 10% of time spent in alltoall()
got offloaded to possibly_receive(), and basically the heavy
performance hits are still the same culprits - but it's worth noting
that they are slightly 'heavier' at np256 than at np16, which
eventually manifests in the total time increase. Anyways, I'd say at
np16 the changes are neutral for this use case.

>> Despite these improvements, the weak scaling for the GMG
>> implementation is still a bit lacking unfortunately as np1=~1s. I
>> ran these tests through gperf in order to gain some more insight and
>> it looks to me that major components slowing down the setup time are
>> still refining/coarsening/distributing_dofs which in turn do a lot
>> of nodal parallel consistency adjusting and setting
>> nonlocal_dof_objects and am wondering if there are maybe some low
>> hanging fruit to improve on around those calls.
>
> There almost certainly is. Could I get comparable results from your
> new fem_system_ex1 settings (with more coarse refinements, I mean) to
> test with?

I ran these studies on a Poisson problem with quad4s, so I think
outside of the increased cost of the projections and refinements of
the second order information, and if we ignore the solve time
increase, the relatively expensive functions in
init_and_attach_petscdm() will similarly show up for fem_system_ex1
under increasing mg levels.

The other option would be the direct comparison using the
soon-to-be-merged multigrid examples in GRINS, which is basically
what's presented in the attachment. Either way, I'd certainly be
interested to learn how this all behaves on other machines because in
the past I've seen situations where MPI related optimizations were
more pessimistic on my local cluster than on other systems.

- Boris |
From: Stogner, R. H <roy...@ic...> - 2019-03-20 18:14:07
|
On Mon, 18 Mar 2019, Boris Boutkov wrote:

> Out of some curiosity I recently rebased my GMG implementation on to
> the upcoming NBX changes in PR #1965 to do some weak scaling analysis.
>
> I ran an np 256 Poisson problem using GMG with ~10k dofs/proc, and in
> short, it seems like the NBX changes provide some solid improvement,
> bringing my total runtime from something like ~19s to ~16s, so a nice
> step on the road to weak GMG scaling. Pre-NBX I had a good chunk (30%
> total w/sub) of my time being spent in the alltoall() which post-NBX
> is down to 2%! This came with a fairly large amount of calls to
> possibly_receive() and now 15% of my total time being spent in there,
> but the overall timing seems to be a win so thanks much for this work!

Thanks for the update!

Greedy question: could you try the same timings at, say, np 16? I was
pretty confident np 256 would be a big win, since the asymptotic
scaling is improved, but it'd be nice to have data points at lower
processor counts too.

> Despite these improvements, the weak scaling for the GMG
> implementation is still a bit lacking unfortunately as np1=~1s. I ran
> these tests through gperf in order to gain some more insight and it
> looks to me that major components slowing down the setup time are
> still refining/coarsening/distributing_dofs which in turn do a lot of
> nodal parallel consistency adjusting and setting nonlocal_dof_objects
> and am wondering if there are maybe some low hanging fruit to improve
> on around those calls.

There almost certainly is. Could I get comparable results from your
new fem_system_ex1 settings (with more coarse refinements, I mean) to
test with?
---
Roy |
From: Boris B. <bor...@bu...> - 2019-03-18 19:59:27
|
Hello all,

Out of some curiosity I recently rebased my GMG implementation on to
the upcoming NBX changes in PR #1965 to do some weak scaling analysis.

I ran an np 256 Poisson problem using GMG with ~10k dofs/proc, and in
short, it seems like the NBX changes provide some solid improvement,
bringing my total runtime from something like ~19s to ~16s, so a nice
step on the road to weak GMG scaling. Pre-NBX I had a good chunk (30%
total w/sub) of my time being spent in the alltoall() which post-NBX
is down to 2%! This came with a fairly large amount of calls to
possibly_receive() and now 15% of my total time being spent in there,
but the overall timing seems to be a win so thanks much for this work!

Despite these improvements, the weak scaling for the GMG
implementation is still a bit lacking unfortunately as np1=~1s. I ran
these tests through gperf in order to gain some more insight and it
looks to me that the major components slowing down the setup time are
still refining/coarsening/distributing_dofs, which in turn do a lot of
nodal parallel consistency adjusting and setting nonlocal_dof_objects,
and am wondering if there are maybe some low hanging fruit to improve
on around those calls.

Logs for these runs and the corresponding gperf outputs can be found
at [1], and come with the minor note that I had some trouble combining
all of the profile_pid logs into a single gperf output file. Basically
all the function names get garbled if I pass in all the profile_pid
outputs to pprof (hoping maybe Derek Gaston can provide some tips on
how to do this properly?), but as such the linked 'web' plots are
taken from a single representative gprof_pid output. I cross compared
a number of such outputs and the results seemed fairly consistent
between samples, but obviously this isn't perfect.

Anyways, figured I'd post this as an update, please let me know if I
can provide anything further surrounding these tests that would be
helpful.

Thanks as always,
- Boris

[1]: https://drive.google.com/open?id=1dgHKiC8oStpTR0yJFv5uVb8wyJRIFTkk |
From: Stogner, R. H <roy...@ic...> - 2019-01-29 18:04:54
|
On Tue, 29 Jan 2019, Boris Boutkov wrote:

> In short, I can't seem to properly extract the sub-projections needed
> for GMG when considering problems with multiple variables. It seems
> to me that this is an issue with maintaining a consistent dof
> ordering when passing the row and column index vectors to
> PetscMatrix::create_submatrix(), an order which needs to be "in sync"
> with the ordering of dofs coming from System::projection_matrix().

How are the orders out of sync, exactly?

> For example if I wish to extract the velocity sub-block in a Stokes
> type problem, my first attempt was to simply use
> DofMap::local_variable_indices(), concatenating the relevant variable
> dofs and passing them to create_submatrix() but this seemed to
> provide incorrectly sorted SubMats which later manifests as
> deteriorated convergence on coarser grid GMG levels.

Are the local_variable_indices results not in ascending order? If
not, that would be a bug. If so, what order would you want instead?
---
Roy |
From: Boris B. <bor...@bu...> - 2019-01-29 15:54:17
|
Hello again all,

I've been working at getting GMG + fieldsplit functioning together and
am running into some difficulties which I hope I could get some
feedback on.

In short, I can't seem to properly extract the sub-projections needed
for GMG when considering problems with multiple variables. It seems to
me that this is an issue with maintaining a consistent dof ordering
when passing the row and column index vectors to
PetscMatrix::create_submatrix(), an order which needs to be "in sync"
with the ordering of dofs coming from System::projection_matrix().

For example if I wish to extract the velocity sub-block in a Stokes
type problem, my first attempt was to simply use
DofMap::local_variable_indices(), concatenating the relevant variable
dofs and passing them to create_submatrix(), but this seemed to
provide incorrectly sorted SubMats which later manifests as
deteriorated convergence on coarser grid GMG levels.

I also tried looping over active local elems and getting their
parents' old_dof_indices, but this seems to provide the same submat as
the earlier attempts and appears to still be ordered by variable. I'll
also note that sorting these vectors seems to only make matters worse.

I've been playing with these ideas in a branch [1] which adds a unit
test that attempts to add two vars to a system and extract the whole
"sub"matrix, which should be equivalent to the global projection, and
I can't seem to find the right way to do this.

Does anyone have any suggestions on some other DoF ordering extraction
I can try, or maybe I'm going about this in the wrong way altogether?

Thanks as always for any information you can provide,
- Boris

[1] : https://github.com/bboutkov/libmesh/tree/subproj_test |