From: <ed...@op...> - 2020-05-11 03:08:15
|
Hi. I want to know whether libMesh is actually using MUMPS and SuperLU, and if it is not, how to fix it. I just recompiled libMesh, and I am getting the following message with reduced_basis_ex7 (this is actually an improvement, because it used to crash at that point). Thanks!

#+begin_EXAMPLE
... snip 8< ...
***************************************************************
* Running Example reduced_basis_ex7:
* ./example-opt -online_mode 1 -pc_factor_mat_solver_package superlu
***************************************************************
LibMesh was configured with PETSc >= 3.9, but command-line options use deprecated syntax. Skipping now.
WARNING! There are options you set that were not used!
WARNING! could be spelling mistake, etc!
There are 2 unused database options. They are:
Option left: name:-online_mode value: 1
Option left: name:-pc_factor_mat_solver_package value: superlu
***************************************************************
* Done Running Example reduced_basis_ex7:
* ./example-opt -online_mode 1 -pc_factor_mat_solver_package superlu
***************************************************************
#+end_EXAMPLE

I get a similar message with MUMPS. I am attaching my config.log as a tarball. |
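The warning in that output is likely the key: PETSc 3.9 renamed its "mat solver package" options, so -pc_factor_mat_solver_package is reported as an unused database option and the factorization silently falls back to PETSc's default solver rather than SuperLU or MUMPS. A probable fix, assuming the build itself is fine, is simply the newer option spelling:

./example-opt -online_mode 1 -pc_factor_mat_solver_type superlu
./example-opt -online_mode 1 -pc_factor_mat_solver_type mumps

Whether SuperLU and MUMPS were actually linked in can still be confirmed from config.log; if the renamed option is accepted without the "unused database options" warning, the external solver is being used.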
From: David K. <dav...@ak...> - 2020-04-23 14:11:06
|
Hi Nikhil,

The RB method is really not intended to be used with such a large number of Aq matrices, since the computational cost of RB grows quite fast with the number of terms in the affine expansion (e.g. IIRC the error bound used in the greedy depends on the number of terms to the power of 4). I think a much better option here would be to try to reduce your number of Aq terms. I don't know what problem you're trying to solve so I don't know if that is possible or not, but I suggest you look into it.

One option for this would be to somehow split up your parameter domain to have fewer parameters. Another would be to use EIM since that can compute an approximate affine expansion which may be more efficient than what you're doing.

Best regards,
David

On Thu, Apr 23, 2020 at 1:06 AM Nikhil Vaidya <nik...@gm...> wrote:
> [Nikhil's original question, quoted in full; see his message of 2020-04-23 05:06 below.] |
From: Paul T. B. <ptb...@gm...> - 2020-04-23 13:22:20
|
Hi Nikhil,

Typically, you would grab the "raw" underlying Mat object and do the PETSc call yourself. This would be accessible from the libMesh::PetscMatrix object. Typically the ImplicitSystem has the system matrix, but I'm not familiar with where these matrices would be cached on the RB side of things.

I would use the Doxygen documentation as a starting point: libmesh.github.io (although it seems GitHub is misbehaving ATM).

I hope that helps.

Best,
Paul

On Thu, Apr 23, 2020 at 12:06 AM Nikhil Vaidya <nik...@gm...> wrote:
> [Nikhil's original question, quoted in full; see his message of 2020-04-23 05:06 below.] |
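For reference, a minimal sketch of the raw-Mat access Paul describes is below. It assumes a PETSc-backed build so that the SparseMatrix really is a PetscMatrix underneath; the helper name ignore_zero_entries() and the idea of applying it to a particular system matrix are illustrative, not an existing libMesh or MOOSE API.

#include "libmesh/sparse_matrix.h"
#include "libmesh/petsc_matrix.h"   // needed so the dynamic_cast sees a complete type
#include <petscmat.h>

using namespace libMesh;

// Sketch: flag a system matrix so PETSc ignores explicitly-stored zero values.
// 'sys_matrix' is assumed to be e.g. *some_implicit_system.matrix.
void ignore_zero_entries(SparseMatrix<Number> & sys_matrix)
{
  // Cast the generic libMesh matrix down to the PETSc implementation.
  auto * petsc_mat = dynamic_cast<PetscMatrix<Number> *>(&sys_matrix);
  if (!petsc_mat)
    libmesh_error_msg("Expected a PetscMatrix; is libMesh built with PETSc?");

  // Grab the raw Mat and set the option directly through PETSc's C API.
  MatSetOption(petsc_mat->mat(), MAT_IGNORE_ZERO_ENTRIES, PETSC_TRUE);
}

Note that MAT_IGNORE_ZERO_ENTRIES only affects values inserted after the option is set, so it would need to be applied before the matrices are assembled.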
From: Nikhil V. <nik...@gm...> - 2020-04-23 05:06:41
|
Hi, I am using reduced basis for my work. The number of Aq matrices in my problem is very large (~250). Because of this, the memory requirements of my program are ~200 times the mesh-file size. I did some memory usage study and found out that the allocate_data_structures() function in RBConstruction is responsible for most of the memory usage. I think this might be due to the large number of Petsc sparse matrices being constructed. Each Aq matrix is known to have non-zero entries corresponding to only a small sub-domain of the geometry. All these sub-domains are separate blocks in the mesh. I get the feeling that for each sparse matrix memory is being allocated even for the zero entries. I saw that Petsc has an option MAT_IGNORE_ZERO_ENTRIES. How does one pass this in libMesh? Please note that I am not using libMesh directly. My code is a MOOSE app, and I am using the RB functionality of the underlying libmesh code. Best regards, Nikhil |
From: Ata M. <a.m...@gm...> - 2020-04-21 21:01:46
|
Hi, I just pulled and built HEAD (0cceba267df9e8f10a9c739a928c0b109039c888) this morning. The SNES variational inequality solvers now break on the head node when assembling the bounds. For example, misc/ex7 now breaks with:

(base) node004:/shared/libmesh/Linux-gnu-O-new/examples/miscellaneous/ex7$ mpirun -machinefile $PBS_NODEFILE -np 2 ./example-opt -snes_type vinewtonrsls
[0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
[0]PETSC ERROR: Object is in wrong state
[0]PETSC ERROR: Invalid vector
[0]PETSC ERROR: See https://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
[0]PETSC ERROR: Petsc Release Version 3.13.0, Mar 29, 2020
[0]PETSC ERROR: ./example-opt on a Linux-gnu-O named node004 by mesgarnejad Tue Apr 21 16:48:29 2020
[0]PETSC ERROR: Configure options --download-fblaslapack=1 --download-hdf5=1 --download-hypre=1 --download-metis=1 --download-ml=1 --download-mumps=1 --download-parmetis=1 --download-scalapack=1 --download-sprng=1 --download-suitesparse=yes --download-superlu=yes --download-superlu_dist=/shared/petsc-3.13.0/Linux-gnu-O/externalpackages/v6.3.0.tar.gz --download-triangle=1 --download-yaml=1 --with-boost-dir=/shared/opt/boost --with-cmake=yes --with-debugging=0 --with-mpi-dir=/shared/opt/openmpi/gcc-5.4.0 --with-pic --with-shared-libraries=1 --with-valgrind=0 --with-x11=1 PETSC_ARCH=Linux-gnu-O
[0]PETSC ERROR: #1 DMRestoreGlobalVector() line 286 in /shared/petsc-3.13.0/src/dm/interface/dmget.c
[0]PETSC ERROR: #2 SNESSetWorkVecs() line 758 in /shared/petsc-3.13.0/src/snes/interface/snesut.c
[0]PETSC ERROR: #3 SNESSetUp_VI() line 371 in /shared/petsc-3.13.0/src/snes/impls/vi/vi.c
[0]PETSC ERROR: #4 SNESSetUp_VINEWTONRSLS() line 711 in /shared/petsc-3.13.0/src/snes/impls/vi/rs/virs.c
[0]PETSC ERROR: #5 SNESSetUp() line 3148 in /shared/petsc-3.13.0/src/snes/interface/snes.c

And here is the traceout file:

[New LWP 17273]
[New LWP 17271]
[Thread debugging using libthread_db enabled]
0x00000031c34ac90d in waitpid () from /lib64/libc.so.6
To enable execution of this file add
    add-auto-load-safe-path /shared/opt/gcc/lib64/libstdc++.so.6.0.21-gdb.py
line to your configuration file "/home/mesgarnejad/.gdbinit".
To completely disable this security protection add
    set auto-load safe-path /
line to your configuration file "/home/mesgarnejad/.gdbinit".
For more information about this security protection see the "Auto-loading safe path" section in the GDB manual. E.g., run from the shell:
    info "(gdb)Auto-loading safe path"
#0  0x00000031c34ac90d in waitpid () from /lib64/libc.so.6
#1  0x00000031c343e909 in do_system () from /lib64/libc.so.6
#2  0x00000031c343ec40 in system () from /lib64/libc.so.6
#3  0x00007fc748b56089 in libMesh::print_trace(std::basic_ostream<char, std::char_traits<char> >&) () from /shared/libmesh/Linux-gnu-O-new/lib/libmesh_opt.so.0
#4  0x00007fc748b56b2c in libMesh::write_traceout() () from /shared/libmesh/Linux-gnu-O-new/lib/libmesh_opt.so.0
#5  0x00007fc748b35b1a in libMesh::libmesh_terminate_handler() () from /shared/libmesh/Linux-gnu-O-new/lib/libmesh_opt.so.0
#6  0x00007fc747d03ba6 in __cxxabiv1::__terminate(void (*)()) () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:47
#7  0x00007fc747d03bf1 in std::terminate() () at ../../.././libstdc++-v3/libsupc++/eh_terminate.cc:57
#8  0x00007fc747d03e08 in __cxa_throw () at ../../.././libstdc++-v3/libsupc++/eh_throw.cc:87
#9  0x00007fc7493637d5 in libMesh::PetscNonlinearSolver<double>::solve(libMesh::SparseMatrix<double>&, libMesh::NumericVector<double>&, libMesh::NumericVector<double>&, double, unsigned int) () from /shared/libmesh/Linux-gnu-O-new/lib/libmesh_opt.so.0
#10 0x00007fc7493d352f in libMesh::NonlinearImplicitSystem::solve() () from /shared/libmesh/Linux-gnu-O-new/lib/libmesh_opt.so.0
#11 0x0000000000411073 in Biharmonic::run() ()
#12 0x000000000041e15f in main ()

My config.log for libMesh is here: https://www.dropbox.com/s/uhl8isnxbhmr59s/config.log?dl=0

I'll keep digging to see if I can find the issue.

Thanks,
Ata |
From: <ed...@op...> - 2020-03-27 05:36:56
|
On 2020-03-26 16:12, Roy Stogner wrote:
> I spent last night working in my underwear and staying up too late, I haven't seen most of my coworkers in person in weeks, I sometimes don't shower until nearly dinner time, I'm tormenting my kids with advanced math and computer science study, I'm eating too much preservative-laden junk rather than go to the grocery store for fresh food, and I'm too often on the couch playing video games instead of getting out of the house for fun.
>
> So, nothing new. What's this I hear about a quarantine?

Nice, it seems that Roy / quarantine = 2 * fun

> I'm a naturally paranoid person with a lot of savings, but I'm also a naturally greedy person who had too much of that savings in the stock market.

What? Are you saying that the stock market is speculative? NO!

> Joking aside, so far the bad news around here is one extended family member who's had an old classmate die and another who's lost her job, but this is still doubling once or twice a week, it's just barely beginning to overwhelm medical capacity in the US, and it's going to get vastly worse before it gets better.

I'm really sorry. Let's keep our spirits high! (and stay safe). All the best :) |
From: Roy S. <roy...@ic...> - 2020-03-26 16:28:38
|
On Thu, 26 Mar 2020, John Peterson wrote: > Thanks, I hope everyone is staying healthy as well. I spent last night working in my underwear and staying up too late, I haven't seen most of my coworkers in person in weeks, I sometimes don't shower until nearly dinner time, I'm tormenting my kids with advanced math and computer science study, I'm eating too much preservative-laden junk rather than go to the grocery store for fresh food, and I'm too often on the couch playing video games instead of getting out of the house for fun. So, nothing new. What's this I hear about a quarantine? > I feel fortunate that our software development work can largely > continue on unchanged, but I know it's not the same for everyone. Yeah, I do feel incredibly lucky about that. I'm a naturally paranoid person with a lot of savings, but I'm also a naturally greedy person who had too much of that savings in the stock market. Joking aside, so far the bad news around here is one extended family member who's had an old classmate die and another who's lost her job, but this is still doubling once or twice a week, it's just barely beginning to overwhelm medical capacity in the US, and it's going to get vastly worse before it gets better. --- Roy |
From: John P. <jwp...@gm...> - 2020-03-26 14:36:18
|
On Thu, Mar 26, 2020 at 7:45 AM <ed...@op...> wrote: > Dear libmesh-users, > > I hope that everyone around here is doing well, that we are all fine and > that we can find ways in which to cooperate beyond the adversity. > Please, take care of yourselves and those who surround you :) . > Hi Edgar, Thanks, I hope everyone is staying healthy as well. I feel fortunate that our software development work can largely continue on unchanged, but I know it's not the same for everyone. -- John |
From: <ed...@op...> - 2020-03-26 12:44:58
|
Dear libmesh-users, I hope that everyone around here is doing well, that we are all fine and that we can find ways in which to cooperate beyond the adversity. Please take care of yourselves and those who surround you :) . |
From: John P. <jwp...@gm...> - 2020-03-13 15:10:01
|
On Thu, Mar 12, 2020 at 5:12 AM Nikhil Vaidya <nik...@gm...> wrote:
> Hello,
>
> Does the libmesh MeshBase class provide a way to find out to which subdomain(s) a particular node belongs? I looked through the documentation, but couldn't find a function doing this.

Hi Nikhil,

Nodes don't belong to a single subdomain in general, so this information is not stored anywhere. You could get the set of all Elems which are "connected" to a given node (see build_nodes_to_elem_map() in mesh_tools.h), and then use a heuristic, e.g. the minimum connected Elem subdomain id, to "choose" a subdomain for a Node. You could then store this information as an "extra_node_integer" (see MeshBase::add_node_integer() in mesh_base.h) if desired.

-- John |
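A rough sketch of the recipe John outlines might look like the following. It assumes the std::vector overload of MeshTools::build_nodes_to_elem_map() (newer libMesh versions also provide an unordered_map variant), uses the minimum connected subdomain id purely as an example heuristic, and the extra-integer name "node_subdomain" is made up for illustration.

#include "libmesh/mesh_base.h"
#include "libmesh/mesh_tools.h"
#include "libmesh/elem.h"
#include "libmesh/node.h"

#include <algorithm>
#include <limits>
#include <vector>

using namespace libMesh;

// Sketch: assign each node the minimum subdomain id of its connected elements
// and stash it as an extra node integer named "node_subdomain" (illustrative name).
void tag_node_subdomains(MeshBase & mesh)
{
  // The extra integer must be registered before it can be set on nodes.
  const unsigned int idx = mesh.add_node_integer("node_subdomain");

  // nodes_to_elem[node_id] lists all elements touching that node.
  std::vector<std::vector<const Elem *>> nodes_to_elem;
  MeshTools::build_nodes_to_elem_map(mesh, nodes_to_elem);

  for (auto & node : mesh.node_ptr_range())
    {
      subdomain_id_type chosen = std::numeric_limits<subdomain_id_type>::max();
      for (const Elem * elem : nodes_to_elem[node->id()])
        chosen = std::min(chosen, elem->subdomain_id());

      node->set_extra_integer(idx, chosen);
    }
}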
From: Nikhil V. <nik...@gm...> - 2020-03-12 10:12:26
|
Hello, Does the libmesh MeshBase class provide a way to find out to which subdomain(s) a particular node belongs? I looked through the documentation, but couldn't find a function doing this. Best regards, Nikhil |
From: John P. <jwp...@gm...> - 2020-02-14 16:19:10
|
On Fri, Feb 14, 2020 at 6:26 AM Bailey Curzadd <bcu...@gm...> wrote: > I did a lot of testing with dbg and opt builds. The issue seems to be some > optimization flag activated by -march=native when compiling my own code. > Leaving this out resolves all of the spurious memory errors, which were > probably the result of Eigen's fussiness about alignment. The performance > gain in the cases where it worked was pretty minimal, so I don't really > feel like sorting through the list of flags to find the troublemakers. I > wouldn't worry about your implementation; the issue seems to be my own > code, and I have a habit of breaking things. > Ah, I would not have guessed that optimization compiler flags were the issue, glad you figured it out. -- John |
From: Bailey C. <bcu...@gm...> - 2020-02-14 12:27:03
|
I did a lot of testing with dbg and opt builds. The issue seems to be some optimization flag activated by -march=native when compiling my own code. Leaving this out resolves all of the spurious memory errors, which were probably the result of Eigen's fussiness about alignment. The performance gain in the cases where it worked was pretty minimal, so I don't really feel like sorting through the list of flags to find the troublemakers. I wouldn't worry about your implementation; the issue seems to be my own code, and I have a habit of breaking things.

Regards,
Bailey

On Thu, Feb 13, 2020 at 5:58 PM John Peterson <jwp...@gm...> wrote:
> [John's reply of 2020-02-13 16:58, quoted in full; see below.] |
From: John P. <jwp...@gm...> - 2020-02-13 16:58:35
|
On Thu, Feb 13, 2020 at 10:41 AM Roy Stogner <roy...@ic...> wrote:
> On Thu, 13 Feb 2020, Bailey Curzadd wrote:
>
> > /usr/local/lib/libmesh_opt.so.0
>
> Well, this is the first thing you'll want to change. Often times a bug that exhibits as an incomprehensible segmentation fault in opt mode will instead trigger a sensible assertion failure in dbg mode.

In addition to compiling in dbg mode, which is a great suggestion, you may also want to see if you can reproduce the error with a very stripped down Eigen-only example. We don't do anything particularly controversial with our Eigen wrappers, for example the EigenSparseLinearSolver does not even appear to have any "state" which could get messed up between solves. So I think it's likely that the error is in Eigen itself and you'll need to get help from those devs, who will likely want an Eigen-only example.

Of course, now that I've said that, we'll probably figure out that it's our fault somehow :_[

-- John |
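As an illustration of the kind of stripped-down reproducer John is asking for, a libMesh-free Eigen test could look like the sketch below: it assembles a small 1-D Laplacian, solves once with BiCGSTAB and once with SparseLU, and compares the results. All sizes and values are arbitrary.

#include <Eigen/Sparse>
#include <iostream>
#include <vector>

int main()
{
  const int n = 100;

  // Assemble a simple SPD tridiagonal matrix (a 1-D Laplacian stencil).
  std::vector<Eigen::Triplet<double>> triplets;
  for (int i = 0; i < n; ++i)
    {
      triplets.emplace_back(i, i, 2.0);
      if (i > 0)     triplets.emplace_back(i, i - 1, -1.0);
      if (i < n - 1) triplets.emplace_back(i, i + 1, -1.0);
    }
  Eigen::SparseMatrix<double> A(n, n);
  A.setFromTriplets(triplets.begin(), triplets.end());

  const Eigen::VectorXd b = Eigen::VectorXd::Ones(n);

  // Iterative solve, analogous to libMesh's default Eigen path.
  Eigen::BiCGSTAB<Eigen::SparseMatrix<double>> bicg;
  bicg.compute(A);
  const Eigen::VectorXd x_it = bicg.solve(b);

  // Direct solve with SparseLU on the same matrix.
  Eigen::SparseLU<Eigen::SparseMatrix<double>> lu;
  lu.compute(A);
  const Eigen::VectorXd x_lu = lu.solve(b);

  std::cout << "difference between solves: " << (x_it - x_lu).norm() << std::endl;
  return 0;
}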
From: John P. <jwp...@gm...> - 2020-02-13 16:46:12
|
On Thu, Feb 13, 2020 at 4:59 AM Bailey Curzadd <bcu...@gm...> wrote:
> [Bailey's original message of 2020-02-13 10:59, quoted at length; see below.]
> ...
> Valgrind confirms the confusing part: the choice of solver in the function for step #3 (even the choice of the matrix type template parameter), causes a memory error in the solver used in step #1.

If I understand correctly what you are saying, a change to a later stage (#3) in the code somehow triggered a memory error in an earlier stage (#1) of the code. This likely means that there was a memory error in stage #1 all along, but it was just by random chance that you were not triggering it. Then, the change you made to stage #3 appeared to cause the error, but actually it was just uncovering something that was there all along.

-- John |
From: Roy S. <roy...@ic...> - 2020-02-13 16:41:12
|
On Thu, 13 Feb 2020, Bailey Curzadd wrote:

> /usr/local/lib/libmesh_opt.so.0

Well, this is the first thing you'll want to change. Often times a bug that exhibits as an incomprehensible segmentation fault in opt mode will instead trigger a sensible assertion failure in dbg mode.

> Valgrind confirms the confusing part: the choice of solver in the function for step #3 (even the choice of the matrix type template parameter), causes a memory error in the solver used in step #1. Would you guys have any idea why changing the type of an Eigen solver external to the libMesh library would cause a memory error in the internal solver before the external solver even comes into scope?

This technically *could* be an effect of opt mode, since optimizing compilers are allowed to do all sorts of reordering, under the rule of "if the user code invokes undefined behavior the compiler isn't obligated to care what happens next", but it seems unlikely to me - usually your initialization (assuming you're being literal with the word "scope") makes some function calls across compilation objects, forcing the compiler to be conservative in case those calls have side effects. Eigen is pretty much header-only, so maybe it's not impossible that the compiler really is reordering like crazy in opt mode, but our *shims* to Eigen still force calls to code in .C files.

--- Roy |
From: Bailey C. <bcu...@gm...> - 2020-02-13 10:59:21
|
I developed a code based on libMesh to optimize components subjected to extreme thermal loads. I use a gradient-based optimizer, and the optimization problem has lots of constraint functions, so I have to perform a direct sensitivity analysis where the two systems (thermal and static) both have to be solved once for each design variable. There are many design variables, so, for the sake of efficiency, I solve the system with all of the RHS vectors packed into a matrix. Previously, I did this using PETSc and SuperLU_DIST. For the sake of maintainability and efficiency, I am replacing PETSc with Eigen at the moment, and have run into an issue that I can't understand.

The systems are evaluated using 4 separate functions (this is relevant later):
1) Solve temperatures
2) Solve displacements
3) Differentiate temperatures
4) Differentiate displacements

The solve steps use libMesh's interface and internal Eigen solver. Since this can't be used to solve multiple RHS vectors at once, in each of the differentiation functions I make a copy or const reference of the system matrix using a mat() method I added to EigenSparseMatrix. I then create my own Eigen solver, pack the RHS vectors for the sensitivity analysis into a matrix and solve. I'm testing different variants; some of them work just fine, but some of them crash, and I see no logical reason why.

- If I use an Eigen::BiCGSTAB<EigenSM> solver for steps #3 and #4, everything works as expected, even if I use the matrix copy, which has a column-major storage order.
- If I change the solver in step #4 to Eigen::SparseLU, there is a significant efficiency improvement, and everything works fine.
- However, if I change the solver in step #3 to Eigen::SparseLU, or even just to Eigen::BiCGSTAB<Eigen::SparseMatrix<Number>>, I get the following error message and the code crashes:

double free or corruption (out)
[e2mbacur2018li:27024] *** Process received signal ***
[e2mbacur2018li:27024] Signal: Aborted (6)
[e2mbacur2018li:27024] Signal code: (-6)
[e2mbacur2018li:27024] [ 0] /lib/x86_64-linux-gnu/libpthread.so.0(+0x12890)[0x7fc2af29b890]
[e2mbacur2018li:27024] [ 1] /lib/x86_64-linux-gnu/libc.so.6(gsignal+0xc7)[0x7fc2aeed6e97]
[e2mbacur2018li:27024] [ 2] /lib/x86_64-linux-gnu/libc.so.6(abort+0x141)[0x7fc2aeed8801]
[e2mbacur2018li:27024] [ 3] /lib/x86_64-linux-gnu/libc.so.6(+0x89897)[0x7fc2aef21897]
[e2mbacur2018li:27024] [ 4] /lib/x86_64-linux-gnu/libc.so.6(+0x9090a)[0x7fc2aef2890a]
[e2mbacur2018li:27024] [ 5] /lib/x86_64-linux-gnu/libc.so.6(cfree+0x525)[0x7fc2aef2fe75]
[e2mbacur2018li:27024] [ 6] /usr/local/lib/libmesh_opt.so.0(_ZN5Eigen8internal8bicgstabINS_3RefIKNS_12SparseMatrixIdLi1EiEELi0ENS_11OuterStrideILin1EEEEENS_5BlockIKNS_6MatrixIdLin1ELi1ELi0ELin1ELi1EEELin1ELi1ELb1EEENS9_ISB_Lin1ELi1ELb1EEENS_22DiagonalPreconditionerIdEEEEbRKT_RKT0_RT1_RKT2_RlRNSN_10RealScalarE+0x5a5a)[0x7fc2b11c3e8a]
[e2mbacur2018li:27024] [ 7] /usr/local/lib/libmesh_opt.so.0(_ZN7libMesh23EigenSparseLinearSolverIdE5solveERNS_12SparseMatrixIdEERNS_13NumericVectorIdEES7_dj+0x1946)[0x7fc2b11d7436]
[e2mbacur2018li:27024] [ 8] /usr/local/lib/libmesh_opt.so.0(_ZN7libMesh20LinearImplicitSystem5solveEv+0x40d)[0x7fc2b1233f7d]
[e2mbacur2018li:27024] [ 9] ./mspfc_td(+0xd635c)[0x557868ecb35c]
[e2mbacur2018li:27024] [10] ./mspfc_td(+0xcdf39)[0x557868ec2f39]
[e2mbacur2018li:27024] [11] ./mspfc_td(+0x9a63b)[0x557868e8f63b]
[e2mbacur2018li:27024] [12] ./mspfc_td(+0x2229a)[0x557868e1729a]
[e2mbacur2018li:27024] [13] /lib/x86_64-linux-gnu/libc.so.6(__libc_start_main+0xe7)[0x7fc2aeeb9b97]
[e2mbacur2018li:27024] [14] ./mspfc_td(+0x2381a)[0x557868e1881a]
[e2mbacur2018li:27024] *** End of error message ***
Aborted (core dumped)

Valgrind confirms the confusing part: the choice of solver in the function for step #3 (even the choice of the matrix type template parameter) causes a memory error in the solver used in step #1. Would you guys have any idea why changing the type of an Eigen solver external to the libMesh library would cause a memory error in the internal solver before the external solver even comes into scope? I'd appreciate any insight. For some reason, the choice of solver in step #4 seems to have no connection to the issue at all. Below is an excerpt from valgrind.

Regards,
Bailey

==27128== Invalid free() / delete / delete[] / realloc()
==27128==    at 0x4C30D3B: free (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==27128==    by 0x57C0E89: bool Eigen::internal::bicgstab<Eigen::Ref<Eigen::SparseMatrix<double, 1, int> const, 0, Eigen::OuterStride<-1> >, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1> const, -1, 1, true>, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1>, -1, 1, true>, Eigen::DiagonalPreconditioner<double> >(Eigen::Ref<Eigen::SparseMatrix<double, 1, int> const, 0, Eigen::OuterStride<-1> > const&, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1> const, -1, 1, true> const&, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1>, -1, 1, true>&, Eigen::DiagonalPreconditioner<double> const&, long&, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1>, -1, 1, true>::RealScalar&) (in /usr/local/lib/libmesh_opt.so.0.0.0)
==27128==    by 0x57D4435: libMesh::EigenSparseLinearSolver<double>::solve(libMesh::SparseMatrix<double>&, libMesh::NumericVector<double>&, libMesh::NumericVector<double>&, double, unsigned int) (in /usr/local/lib/libmesh_opt.so.0.0.0)
==27128==    by 0x5830F7C: libMesh::LinearImplicitSystem::solve() (in /usr/local/lib/libmesh_opt.so.0.0.0)
==27128==    by 0x1DE35B: MinStressPFC_TD::_solve_thermal() (in /home/bacur/minstresspfc/mspfc_td)
==27128==    by 0x1D5F38: MinStressPFC::eval_sys_equations(unsigned int, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&) (in /home/bacur/minstresspfc/mspfc_td)
==27128==    by 0x1A263A: OptimizerMMA_eigen::iterate() (in /home/bacur/minstresspfc/mspfc_td)
==27128==    by 0x12A299: main (in /home/bacur/minstresspfc/mspfc_td)
==27128==  Address 0x1905ee00 is 32 bytes inside a block of size 2,880 alloc'd
==27128==    at 0x4C2FB0F: malloc (in /usr/lib/valgrind/vgpreload_memcheck-amd64-linux.so)
==27128==    by 0x16C97C: Eigen::internal::aligned_malloc(unsigned long) (in /home/bacur/minstresspfc/mspfc_td)
==27128==    by 0x1BF31B: Eigen::PlainObjectBase<Eigen::Matrix<double, -1, 1, 0, -1, 1> >::resize(long, long) (in /home/bacur/minstresspfc/mspfc_td)
==27128==    by 0x57BBBC6: bool Eigen::internal::bicgstab<Eigen::Ref<Eigen::SparseMatrix<double, 1, int> const, 0, Eigen::OuterStride<-1> >, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1> const, -1, 1, true>, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1>, -1, 1, true>, Eigen::DiagonalPreconditioner<double> >(Eigen::Ref<Eigen::SparseMatrix<double, 1, int> const, 0, Eigen::OuterStride<-1> > const&, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1> const, -1, 1, true> const&, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1>, -1, 1, true>&, Eigen::DiagonalPreconditioner<double> const&, long&, Eigen::Block<Eigen::Matrix<double, -1, 1, 0, -1, 1>, -1, 1, true>::RealScalar&) (in /usr/local/lib/libmesh_opt.so.0.0.0)
==27128==    by 0x57D4435: libMesh::EigenSparseLinearSolver<double>::solve(libMesh::SparseMatrix<double>&, libMesh::NumericVector<double>&, libMesh::NumericVector<double>&, double, unsigned int) (in /usr/local/lib/libmesh_opt.so.0.0.0)
==27128==    by 0x5830F7C: libMesh::LinearImplicitSystem::solve() (in /usr/local/lib/libmesh_opt.so.0.0.0)
==27128==    by 0x1DE35B: MinStressPFC_TD::_solve_thermal() (in /home/bacur/minstresspfc/mspfc_td)
==27128==    by 0x1D5F38: MinStressPFC::eval_sys_equations(unsigned int, std::vector<double, std::allocator<double> > const&, std::vector<double, std::allocator<double> >&, std::vector<double, std::allocator<double> >&) (in /home/bacur/minstresspfc/mspfc_td)
==27128==    by 0x1A263A: OptimizerMMA_eigen::iterate() (in /home/bacur/minstresspfc/mspfc_td)
==27128==    by 0x12A299: main (in /home/bacur/minstresspfc/mspfc_td) |
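As an aside, the multi-right-hand-side direct solve described above can be done entirely within Eigen along the lines of the sketch below: factor the (column-major) system matrix once with SparseLU and hand solve() a dense matrix whose columns are the right-hand sides. The names A, B and solve_multi_rhs() are placeholders, not part of libMesh's API.

#include <Eigen/Dense>
#include <Eigen/Sparse>
#include <stdexcept>

// Sketch: factor once, solve many right-hand sides.
// A: n x n column-major sparse system matrix (e.g. a copy of the assembled matrix).
// B: n x m dense matrix whose columns are the sensitivity right-hand sides.
Eigen::MatrixXd solve_multi_rhs(const Eigen::SparseMatrix<double> & A,
                                const Eigen::MatrixXd & B)
{
  Eigen::SparseLU<Eigen::SparseMatrix<double>> lu;
  lu.analyzePattern(A);   // symbolic factorization (reusable if the sparsity pattern is fixed)
  lu.factorize(A);        // numeric factorization

  if (lu.info() != Eigen::Success)
    throw std::runtime_error("SparseLU factorization failed");

  // solve() accepts a dense matrix of right-hand sides and returns all solutions at once.
  return lu.solve(B);
}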
From: David K. <dav...@ak...> - 2020-02-07 18:53:29
|
On Fri, Feb 7, 2020 at 1:34 PM Nikhil Vaidya <nik...@gm...> wrote:
> Ah, I should've put my question clearly. I meant which NumericVector is the solution saved in? I am asking this because I would like to compute the RB vs FE error in energy norm and check if it is indeed lower than the error bound returned by TransientRBEvaluation::rb_solve()

TransientRBConstruction is a subclass of System, and it stores each time step in the System's solution vector. In particular, it calls solve_for_matrix_and_rhs() at each time step, which uses the solution vector.

Note also that there is a boolean compute_truth_projection_error that can be used to compute the error between the RB solution and FE solution (that is used in the RB training), so you might want to look into that, and the set_error_temporal_data() function. You'll have to dig around in TransientRBConstruction a bit in order to do what you want, I think.

Note that the TransientRB code hasn't had much attention lately, so if you want to make any updates or improvements to the code, please feel free to submit a PR.

Best regards,
David

> On Fri, Feb 7, 2020, 4:54 PM David Knezevic <dav...@ak...> wrote:
> [Earlier messages in this thread, quoted in full; see below.] |
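Once an RB solution has been reconstructed as a vector in the full FE space, the energy-norm error Nikhil mentions can be computed with generic libMesh operations; a minimal sketch follows. It assumes u_fe and u_rb live in the same FE space and that A is the symmetric matrix defining the energy inner product (for a transient problem the appropriate matrix and time step must be chosen); none of these names come from the RB classes themselves.

#include "libmesh/numeric_vector.h"
#include "libmesh/sparse_matrix.h"

#include <cmath>
#include <memory>

using namespace libMesh;

// Sketch: ||e||_A = sqrt(e . (A e)) with e = u_fe - u_rb.
Real energy_norm_error(const SparseMatrix<Number> & A,
                       const NumericVector<Number> & u_fe,
                       const NumericVector<Number> & u_rb)
{
  // e = u_fe - u_rb
  std::unique_ptr<NumericVector<Number>> e = u_fe.clone();
  e->add(-1., u_rb);

  // Ae = A * e
  std::unique_ptr<NumericVector<Number>> Ae = e->zero_clone();
  A.vector_mult(*Ae, *e);

  // Energy norm (assumes A is symmetric positive (semi-)definite).
  return std::sqrt(libmesh_real(Ae->dot(*e)));
}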
From: Nikhil V. <nik...@gm...> - 2020-02-07 18:34:00
|
Ah, I should've put my question clearly. I meant which NumericVector is the solution saved in? I am asking this because I would like to compute the RB vs FE error in energy norm and check if it is indeed lower than the error bound returned by TransientRBEvaluation::rb_solve() Best, Nikhil On Fri, Feb 7, 2020, 4:54 PM David Knezevic <dav...@ak...> wrote: > I would like to know where the transient RB truth solution is saved during >> the execution of the function TransientRBConstruction::truth_solve(). >> > > I believe it just writes it to the current directory. You specify > "write_interval" and then it will write out the solution at those > intervals, e.g. if write_interval = 10, it will write out every 10th time > step. If write_interval is <= 0, then it doesn't write anything. > > Best regards, > David > > |
From: David K. <dav...@ak...> - 2020-02-07 15:54:50
|
> I would like to know where the transient RB truth solution is saved during > the execution of the function TransientRBConstruction::truth_solve(). > I believe it just writes it to the current directory. You specify "write_interval" and then it will write out the solution at those intervals, e.g. if write_interval = 10, it will write out every 10th time step. If write_interval is <= 0, then it doesn't write anything. Best regards, David |
From: Nikhil V. <nik...@gm...> - 2020-02-07 15:46:33
|
Hello, I would like to know where the transient RB truth solution is saved during the execution of the function TransientRBConstruction::truth_solve(). Best regards, Nikhil |
From: John P. <jwp...@gm...> - 2020-01-24 19:40:24
|
On Fri, Jan 24, 2020 at 1:33 PM Prashant K. Jha <pjh...@gm...> wrote:
> [Prashant's original question about dynamic_cast-ing the system matrix to PetscMatrix, quoted in full; see his message of 2020-01-24 19:32 below.]

I think the issue here is you need to #include "libmesh/petsc_matrix.h" since otherwise PetscMatrix is not a complete type.

-- John |
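To make that concrete, a version of the snippet from the question below that should compile is sketched here; the substantive changes are the extra include and using -> (rather than .) on the pointer returned by the dynamic_cast. The system and variable names are taken from the question, and the surrounding function is illustrative.

#include "libmesh/equation_systems.h"
#include "libmesh/transient_system.h"
#include "libmesh/linear_implicit_system.h"
#include "libmesh/enum_order.h"
#include "libmesh/petsc_matrix.h"   // makes PetscMatrix a complete type for the dynamic_cast
#include <petscmat.h>

using namespace libMesh;

// Sketch: set up the 1-D pressure system and relax PETSc's new-nonzero
// allocation check on its matrix, as in Prashant's snippet.
void setup_pressure_system(EquationSystems & net_sys)
{
  auto & pres =
    net_sys.add_system<TransientLinearImplicitSystem>("Pressure_1D");
  pres.add_variable("pressure_1d", FIRST);
  net_sys.init();

  // pres.matrix is a SparseMatrix<Number>*; downcast to the PETSc implementation.
  auto * pet_mat = dynamic_cast<PetscMatrix<Number> *>(pres.matrix);
  if (pet_mat)
    MatSetOption(pet_mat->mat(), MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);
}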
From: Prashant K. J. <pjh...@gm...> - 2020-01-24 19:32:54
|
Hi All,

I am building a network of 1-d EDGE2 elements embedded in 2-d or 3-d. At any given node of the network, it may have more than 2 elements intersecting. To enforce continuity of flux at such nodes, I have to add a constraint relating the dofs of intersecting elements. I am using the penalty method to enforce the continuity.

When I am adding a value to the (i,j) element of the matrix, where i and j are dofs related by a constraint, I get a PETSc malloc error and it suggests using MatSetOption on the matrix. I do that as follows:

auto &pres = net_sys.add_system<TransientLinearImplicitSystem>("Pressure_1D");
pres.add_variable("pressure_1d", FIRST);

net_sys.init();

PetscMatrix<Number> *pet_mat = dynamic_cast<PetscMatrix<Number>*>(pres.matrix);
MatSetOption( pet_mat.mat(), MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE);

Casting the matrix associated to the pres system into a Petsc matrix gives the following error:

model.cpp:1163:53: error: cannot dynamic_cast ‘(& pres)->libMesh::TransientSystem<libMesh::LinearImplicitSystem>::<anonymous>.libMesh::LinearImplicitSystem::<anonymous>.libMesh::ImplicitSystem::matrix’ (of type ‘class libMesh::SparseMatrix<double>*’) to type ‘class libMesh::PetscMatrix<double>*’ (target is not pointer or reference to complete type) dynamic_cast<PetscMatrix<Number>*>(pres.matrix);

There is not much available online. If someone knows how to resolve this, please share the information. My final goal is to constrain the dofs and I would not mind applying an alternative approach.

Thanks and regards,
Prashant |
From: gmail <a.m...@gm...> - 2020-01-24 19:03:33
|
Hi, I pulled and built libMesh after a long time (~2 years) recently and noticed that the new implementation of PetscDM does not work with the DirichletBoundary class anymore. Any time I try to set the Dirichlet BCs for the problem when running with PetscDM (i.e. using SNES types vinewtonrsls or vinewtonssls), it behaves as if the boundary value is zero! The problem disappears if I do not use PetscDM (i.e. use newtonls). The old implementation used to work very well with Dirichlet BCs, and it's such a shame if the new one cannot use that part of the package. My question is whether this is a bug, or am I supposed to do something differently, for example when I assemble the residual and jacobian? Best, Ata |
From: David K. <dav...@ak...> - 2020-01-23 14:36:03
|
As I guess you noticed, TransientRBConstruction has some functionality to read in an initial condition from an Xdr file. You can make an Xdr file in libMesh by generating a solution (e.g. by projecting a function via project_solution) and writing it out via EquationSystems::write().

However, it's been a long time since I used this functionality in TransientRBConstruction, so I don't remember exactly which approach you should use to generate this initial condition. You may have to experiment a bit, and I can help further if needed.

Best regards,
David

On Thu, Jan 23, 2020 at 8:27 AM Nikhil Vaidya <nik...@gm...> wrote:
> Hello,
>
> I am using the reduced basis module. I need to provide a nonzero initial condition. How can I write an XDR file for this?
>
> Best regards,
> Nikhil Vaidya |
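A rough sketch of the projection-and-write step David mentions is below. The initial_value() function and the scalar profile it returns are hypothetical, and exactly how the resulting Xdr file name is then handed to TransientRBConstruction is not shown here and should be checked against that class's documentation.

#include "libmesh/equation_systems.h"
#include "libmesh/system.h"
#include "libmesh/parameters.h"
#include "libmesh/point.h"

#include <string>

using namespace libMesh;

// Hypothetical nonzero initial condition: u(x,y,z) = 1 + x (purely illustrative).
Number initial_value(const Point & p,
                     const Parameters & /*parameters*/,
                     const std::string & /*system_name*/,
                     const std::string & /*unknown_name*/)
{
  return 1. + p(0);
}

// Project the initial condition onto the FE space and write it to an Xdr file.
void write_initial_condition(EquationSystems & es, const std::string & sys_name)
{
  System & sys = es.get_system<System>(sys_name);

  // nullptr: no gradient function is supplied for the projection.
  sys.project_solution(initial_value, nullptr, es.parameters);

  // WRITE_DATA includes the solution vectors in the output file.
  es.write("initial_condition.xdr", EquationSystems::WRITE_DATA);
}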