From: David K. <dav...@ak...> - 2016-10-07 01:29:56

I'm using GhostingFunctor for a contact solve, in which I consider a 1/4
domain with partial Dirichlet boundary conditions that impose a symmetry
condition (i.e. displacement normal to the symmetry boundary is clamped to
zero, and tangential displacement is unconstrained). This means that I have
Dirichlet constraints that affect the dofs on the contact surface.

What I find is that the solve works fine in serial, but in parallel the
nonlinear convergence fails, presumably because of an incorrect Jacobian.
I have actually run into this exact issue before (when I was augmenting the
sparsity pattern "manually", prior to GhostingFunctor) and I found that the
issue was that the dof constraints on the contact surface were not being
communicated in parallel, which caused the incorrect Jacobian in parallel.
I fixed it by adding artificial Edge2 elements into the mesh (like in
systems_of_equations_ex8) to ensure that the dof constraints are
communicated properly in parallel.

So, my question is: is there a way to achieve the necessary dof constraint
communication with the new GhostingFunctor API? I had hoped that using
"add_algebraic_ghosting_functor" would do the job, but it apparently
doesn't.

Thanks,
David
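[A custom functor for this looks roughly like the sketch below. This shows
only the mechanics of the GhostingFunctor API: find_contact_partner() is a
hypothetical, application-specific search, and whether coupling ghosting
alone suffices to communicate the constraint rows is exactly the open
question in this thread.]

#include "libmesh/ghosting_functor.h"
#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"

using namespace libMesh;

// Request ghosting (and coupling) of each element's contact partner, so
// that its dofs, and any constraints on them, are available off-processor.
class ContactGhostingFunctor : public GhostingFunctor
{
public:
  virtual void operator() (const MeshBase::const_element_iterator & range_begin,
                           const MeshBase::const_element_iterator & range_end,
                           processor_id_type p,
                           map_type & coupled_elements)
  {
    for (MeshBase::const_element_iterator it = range_begin;
         it != range_end; ++it)
      {
        const Elem * partner = this->find_contact_partner(*it);

        // A null CouplingMatrix requests full variable-to-variable coupling.
        if (partner && partner->processor_id() != p)
          coupled_elements.insert
            (std::make_pair(partner, (const CouplingMatrix *) nullptr));
      }
  }

private:
  // Stand-in for the application's contact search; not a libMesh function.
  const Elem * find_contact_partner (const Elem * /*elem*/) const
  { return nullptr; } // replace with a real opposing-surface search
};

// Attached before EquationSystems::init(), e.g.:
//   dof_map.add_coupling_functor(my_contact_functor);

[One relevant distinction: add_coupling_functor also augments the sparsity
pattern, whereas add_algebraic_ghosting_functor only affects the send list,
which may matter for the Jacobian coupling described above.]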
From: John P. <jwp...@gm...> - 2016-10-05 19:25:28

On Wed, Oct 5, 2016 at 12:11 PM, Boris Boutkov <bor...@bu...> wrote:

> Hello all,
>
> I was wondering what algorithm or method libMesh uses to renumber nodes
> and elements, particularly after uniform mesh refinements or after
> equation_system reinits.
>
> I understand that it's designed to help create contiguous blocks on
> processors, but was wondering if there are any more details you can
> provide as to the details of the implementation, such as the specific
> method used. For context, I'm investigating how much the node ordering
> can affect Multigrid convergence rates when comparing different node
> numbering algorithms.

I don't think it is necessarily designed to create contiguous blocks on
processors, although it might do that for Cartesian grids? If you look at
ReplicatedMesh::renumber_nodes_and_elements(), you can see the algorithm
that's used for ReplicatedMesh.

--
John
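[For numbering experiments like the one Boris describes, it can also help
that renumbering can be disabled entirely. A minimal sketch using standard
libMesh calls; the mesh size here is arbitrary:]

#include "libmesh/libmesh.h"
#include "libmesh/mesh.h"
#include "libmesh/mesh_generation.h"

using namespace libMesh;

int main (int argc, char ** argv)
{
  LibMeshInit init(argc, argv);

  Mesh mesh(init.comm());

  // Ask libMesh to preserve whatever numbering the mesh already has,
  // instead of renumbering during preparation and after refinement.
  mesh.allow_renumbering(false);

  MeshTools::Generation::build_square(mesh, 16, 16);

  return 0;
}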
From: Boris B. <bor...@bu...> - 2016-10-05 18:38:34

Hello all,

I was wondering what algorithm or method libMesh uses to renumber nodes and
elements, particularly after uniform mesh refinements or after
equation_system reinits.

I understand that it's designed to help create contiguous blocks on
processors, but was wondering if there are any more details you can provide
as to the details of the implementation, such as the specific method used.
For context, I'm investigating how much the node ordering can affect
Multigrid convergence rates when comparing different node numbering
algorithms.

Thanks for your time,
Boris Boutkov
From: Boris B. <bor...@bu...> - 2016-09-26 18:45:27

Hello again all,

I've been digging at this problem some more and I have some follow-up
questions / observations. I've noticed that the constraint storage is
split up into a number of components, and would appreciate some basic info
as to what these pieces represent.

The assert I am/was tripping was with the DofConstraintRow being empty: a
row which is inserted into the _dof_constraints map, which pairs each row
with a dof id. Now, what I notice is that when I try to use
print_dof_constraints(), the values actually being printed are obtained
through the _primal_constraint_values map and are simply paired with the
_dof_constraints ids. This begs the question: what is actually being
stored (or should be stored) in an individual DofConstraintRow, or is this
some kind of intermediate object that at some point gets moved to other
constraint maps and then cleared?

I've also noticed a _node_constraints map, as well as a _stashed_constraints
map, both of which appear to be unused, though judging by their names they
could potentially apply to my situation. What are the conceptual
differences between the _dof, _node, _primal, and _stashed maps?

Finally, in terms of the overall constraint implementation process, what
does the C^T * K * C constraint matrix that's mentioned in the comments of
constrain_element_matrix refer to? Is it related to the constraint
formulation of Mark Shephard in his "Linear multipoint constraints applied
via transformation as part of a direct stiffness assembly process" paper,
or is there another resource which more closely mimics the overall
philosophy libMesh adheres to when building up this constraint matrix?

Thanks again for your time,
Boris

On Thu, Aug 25, 2016 at 10:09 PM, Roy Stogner <roy...@ic...> wrote:

> On Thu, 25 Aug 2016, Paul T. Bauman wrote:
>
>> On Thu, Aug 25, 2016 at 5:45 PM, John Peterson <jwp...@gm...> wrote:
>>
>>> On Thu, Aug 25, 2016 at 3:38 PM, Boris Boutkov <bor...@bu...> wrote:
>>>
>>>> Hm - uniform refinement, yes. Are the boundary nodes on the outside
>>>> edges of refined elements not considered to be hanging nodes?
>>>> Apologies if I'm misusing the term.
>>>
>>> Hanging nodes to me means "must be constrained to be compatible with a
>>> coarser neighbor", which isn't possible on the boundary.
>>>
>>> I'm not sure what's causing the assert for you, we'll probably need
>>> Roy to comment on the different ways of constraining element matrices,
>>> but again, for uniform refinement I would not expect there to be *any*
>>> constraints.
>>
>> For the boundary nodes, any DirichletBoundary constraints would be
>> handled through that code path, though, right?
>
> That's correct.
> ---
> Roy
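[A note for later readers on the last question: this is the standard
transformation method for multipoint constraints. Writing the constrained
dofs in terms of the unconstrained ones as

  u = C u_r + h,

where C is the constraint matrix and h holds any heterogeneous (e.g.
nonzero Dirichlet) values, then substituting into K u = F and
premultiplying by C^T gives the condensed system

  (C^T K C) u_r = C^T (F - K h).

The C^T K C product is the one referenced in the constrain_element_matrix
comments; the symbols u_r and h here are generic labels for exposition,
not libMesh identifiers.]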
From: Kong (Non-US), F. <fan...@in...> - 2016-09-14 21:23:43

Thanks a lot, John.

Fande

On Wed, Sep 14, 2016 at 3:20 PM, John Peterson <jwp...@gm...> wrote:

> On Wed, Sep 14, 2016 at 2:39 PM, Kong (Non-US), Fande <fan...@in...> wrote:
>
>> Hi Developers, Users,
>>
>> I am trying to use a "typedef" to wrap EigenSystem into TransientSystem:
>>
>>   typedef TransientSystem<EigenSystem> TransientEigenSystem;
>>
>> I will eventually use this in MOOSE. I went to
>> examples/eigenproblems/eigenproblems_ex1 to add one line to test this:
>
> I think we got this figured out, we were just missing an explicit
> instantiation in transient_system.C...
>
> --
> John
From: John P. <jwp...@gm...> - 2016-09-14 21:20:41

On Wed, Sep 14, 2016 at 2:39 PM, Kong (Non-US), Fande <fan...@in...> wrote:

> Hi Developers, Users,
>
> I am trying to use a "typedef" to wrap EigenSystem into TransientSystem:
>
>   typedef TransientSystem<EigenSystem> TransientEigenSystem;
>
> I will eventually use this in MOOSE. I went to
> examples/eigenproblems/eigenproblems_ex1 to add one line to test this:

I think we got this figured out, we were just missing an explicit
instantiation in transient_system.C...

--
John
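[For the archive: an explicit instantiation of the sort John mentions
would look roughly like this sketch; the exact guard and placement in
src/systems/transient_system.C may differ.]

#include "libmesh/transient_system.h"
#include "libmesh/eigen_system.h"

namespace libMesh
{
#ifdef LIBMESH_HAVE_SLEPC
// Forces the compiler to emit TransientSystem<EigenSystem>'s member
// definitions (including the constructor the linker could not find below)
// into this translation unit, and hence into the library.
template class TransientSystem<EigenSystem>;
#endif
}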
From: Kong (Non-US), F. <fan...@in...> - 2016-09-14 21:02:24

Hi Developers, Users,

I am trying to use a "typedef" to wrap EigenSystem into TransientSystem:

typedef TransientSystem<EigenSystem> TransientEigenSystem;

I will eventually use this in MOOSE. I went to
examples/eigenproblems/eigenproblems_ex1 to add one line to test this:

TransientEigenSystem & trst_eigen_system3 =
  equation_systems.add_system<TransientEigenSystem> ("Transient_Eigen_System");

There is a link error:

CXX example_opt-eigenproblems_ex1.o
CXXLD example-opt
Undefined symbols for architecture x86_64:
  "libMesh::TransientSystem<libMesh::EigenSystem>::TransientSystem(libMesh::EquationSystems&, std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&, unsigned int)", referenced from:
      libMesh::TransientSystem<libMesh::EigenSystem>& libMesh::EquationSystems::add_system<libMesh::TransientSystem<libMesh::EigenSystem> >(std::__1::basic_string<char, std::__1::char_traits<char>, std::__1::allocator<char> > const&) in example_opt-eigenproblems_ex1.o
ld: symbol(s) not found for architecture x86_64
clang-3.7: error: linker command failed with exit code 1 (use -v to see invocation)
make: *** [example-opt] Error 1

All files I have changed are:

diff --git a/examples/eigenproblems/eigenproblems_ex1/eigenproblems_ex1.C b/examples/eigenproblems/eigenproblems_ex1/eigenproblems_ex1.C
index 7b4a909..eb8a597 100644
--- a/examples/eigenproblems/eigenproblems_ex1/eigenproblems_ex1.C
+++ b/examples/eigenproblems/eigenproblems_ex1/eigenproblems_ex1.C
@@ -47,6 +47,7 @@
 #include "libmesh/sparse_matrix.h"
 #include "libmesh/numeric_vector.h"
 #include "libmesh/dof_map.h"
+#include "libmesh/transient_system.h"

 // Bring in everything from the libMesh namespace
 using namespace libMesh;
@@ -113,6 +114,10 @@ int main (int argc, char ** argv)
   EigenSystem & eigen_system =
     equation_systems.add_system<EigenSystem> ("Eigensystem");

+  // Fails
+  TransientEigenSystem & trst_eigen_system3 =
+    equation_systems.add_system<TransientEigenSystem> ("Transient_Eigen_System");
+
   // Declare the system variables.
   // Adds the variable "p" to "Eigensystem".  "p"
   // will be approximated using second-order approximation.
diff --git a/include/systems/transient_system.h b/include/systems/transient_system.h
index 4bc529d..1ffe4c0 100644
--- a/include/systems/transient_system.h
+++ b/include/systems/transient_system.h
@@ -22,6 +22,7 @@
 // Local Includes
 #include "libmesh/system.h"
+#include "libmesh/libmesh_config.h"

 // C++ includes

@@ -33,6 +34,7 @@ namespace libMesh
 class LinearImplicitSystem;
 class NonlinearImplicitSystem;
 class ExplicitSystem;
+// class EigenSystem;

 /**
  * This class provides a specific system class.  It aims
@@ -146,7 +148,9 @@
 typedef TransientSystem<LinearImplicitSystem> TransientLinearImplicitSystem;
 typedef TransientSystem<NonlinearImplicitSystem> TransientNonlinearImplicitSystem;
 typedef TransientSystem<ExplicitSystem> TransientExplicitSystem;
 typedef TransientSystem<System> TransientBaseSystem;
-
+//#if LIBMESH_HAVE_SLEPC
+typedef TransientSystem<EigenSystem> TransientEigenSystem;
+//#endif

 // ------------------------------------------------------------
diff --git a/src/systems/equation_systems.C b/src/systems/equation_systems.C
index ee925a6..1bbea21 100644
--- a/src/systems/equation_systems.C
+++ b/src/systems/equation_systems.C
@@ -408,6 +408,8 @@ System & EquationSystems::add_system (const std::string & sys_type,
   // build an eigen system
   else if (sys_type == "Eigen")
     this->add_system<EigenSystem> (name);
+  else if (sys_type == "TransientEigenSystem")
+    this->add_system<TransientEigenSystem> (name);
 #endif

 #if defined(LIBMESH_USE_COMPLEX_NUMBERS)

Thanks.

Fande Kong
From: Roy S. <roy...@ic...> - 2016-08-26 03:07:10

On Thu, 25 Aug 2016, Paul T. Bauman wrote:

> On Thu, Aug 25, 2016 at 5:45 PM, John Peterson <jwp...@gm...> wrote:
>
>> On Thu, Aug 25, 2016 at 3:38 PM, Boris Boutkov <bor...@bu...> wrote:
>>
>>> Hm - uniform refinement, yes. Are the boundary nodes on the outside
>>> edges of refined elements not considered to be hanging nodes?
>>> Apologies if I'm misusing the term.
>>
>> Hanging nodes to me means "must be constrained to be compatible with a
>> coarser neighbor", which isn't possible on the boundary.
>>
>> I'm not sure what's causing the assert for you, we'll probably need Roy
>> to comment on the different ways of constraining element matrices, but
>> again, for uniform refinement I would not expect there to be *any*
>> constraints.
>
> For the boundary nodes, any DirichletBoundary constraints would be
> handled through that code path, though, right?

That's correct.
---
Roy
From: Paul T. B. <ptb...@gm...> - 2016-08-26 00:06:56

Crap, didn't reply-all.

On Thu, Aug 25, 2016 at 5:45 PM, John Peterson <jwp...@gm...> wrote:

> On Thu, Aug 25, 2016 at 3:38 PM, Boris Boutkov <bor...@bu...> wrote:
>
>> Hm - uniform refinement, yes. Are the boundary nodes on the outside
>> edges of refined elements not considered to be hanging nodes? Apologies
>> if I'm misusing the term.
>
> Hanging nodes to me means "must be constrained to be compatible with a
> coarser neighbor", which isn't possible on the boundary.
>
> I'm not sure what's causing the assert for you, we'll probably need Roy
> to comment on the different ways of constraining element matrices, but
> again, for uniform refinement I would not expect there to be *any*
> constraints.

For the boundary nodes, any DirichletBoundary constraints would be handled
through that code path, though, right?
From: John P. <jwp...@gm...> - 2016-08-25 21:46:05

On Thu, Aug 25, 2016 at 3:38 PM, Boris Boutkov <bor...@bu...> wrote:

> Hm - uniform refinement, yes. Are the boundary nodes on the outside edges
> of refined elements not considered to be hanging nodes? Apologies if I'm
> misusing the term.

Hanging nodes to me means "must be constrained to be compatible with a
coarser neighbor", which isn't possible on the boundary.

I'm not sure what's causing the assert for you, we'll probably need Roy to
comment on the different ways of constraining element matrices, but again,
for uniform refinement I would not expect there to be *any* constraints.

--
John
From: Boris B. <bor...@bu...> - 2016-08-25 21:39:06

Hm - uniform refinement, yes. Are the boundary nodes on the outside edges
of refined elements not considered to be hanging nodes? Apologies if I'm
misusing the term.

On Thu, Aug 25, 2016 at 5:28 PM, John Peterson <jwp...@gm...> wrote:

> On Thu, Aug 25, 2016 at 11:57 AM, Boris Boutkov <bor...@bu...> wrote:
>
>> Hello all, (apologies in advance for the double post, looks like I
>> wasn't subscribed properly earlier),
>>
>> I've been working on some libMesh/GRINS/PETSc Multigrid code and have
>> been getting some unexpected convergence results that don't quite line
>> up with theory.
>>
>> Currently I am uniformly refining a simple square grid with Dirichlet
>> BCs and manually element-wise constructing L2 projected interpolation
>> matrices between mesh levels, while constraining each projection using
>> the four argument version of constrain_element_matrix. As a naive first
>> attempt I turned asymmetric_constraint_rows to false, as it helped me
>> get past the `!constraint_row.empty()' assertion, which I didn't quite
>> understand.
>>
>> Having now seen some of the convergence results I think maybe I was too
>> hasty in turning this asymmetric switch off, as my iterative solvers
>> are having some difficulties on some mesh levels while being OK on
>> others; my hunch is this might be related to projections of hanging
>> boundary nodes I introduce while refining - but this leaves me stuck on
>> the above assert.
>
> You said you were doing uniform refinement? I'm confused by "hanging
> boundary nodes".
>
> --
> John
From: John P. <jwp...@gm...> - 2016-08-25 21:29:11

On Thu, Aug 25, 2016 at 11:57 AM, Boris Boutkov <bor...@bu...> wrote:

> Hello all, (apologies in advance for the double post, looks like I
> wasn't subscribed properly earlier),
>
> I've been working on some libMesh/GRINS/PETSc Multigrid code and have
> been getting some unexpected convergence results that don't quite line
> up with theory.
>
> Currently I am uniformly refining a simple square grid with Dirichlet
> BCs and manually element-wise constructing L2 projected interpolation
> matrices between mesh levels, while constraining each projection using
> the four argument version of constrain_element_matrix. As a naive first
> attempt I turned asymmetric_constraint_rows to false, as it helped me
> get past the `!constraint_row.empty()' assertion, which I didn't quite
> understand.
>
> Having now seen some of the convergence results I think maybe I was too
> hasty in turning this asymmetric switch off, as my iterative solvers are
> having some difficulties on some mesh levels while being OK on others;
> my hunch is this might be related to projections of hanging boundary
> nodes I introduce while refining - but this leaves me stuck on the above
> assert.

You said you were doing uniform refinement? I'm confused by "hanging
boundary nodes".

--
John
From: Boris B. <bor...@bu...> - 2016-08-25 17:58:09

Hello all, (apologies in advance for the double post, looks like I wasn't
subscribed properly earlier),

I've been working on some libMesh/GRINS/PETSc Multigrid code and have been
getting some unexpected convergence results that don't quite line up with
theory.

Currently I am uniformly refining a simple square grid with Dirichlet BCs
and manually element-wise constructing L2 projected interpolation matrices
between mesh levels, while constraining each projection using the four
argument version of constrain_element_matrix. As a naive first attempt I
turned asymmetric_constraint_rows to false, as it helped me get past the
`!constraint_row.empty()' assertion, which I didn't quite understand.

Having now seen some of the convergence results I think maybe I was too
hasty in turning this asymmetric switch off, as my iterative solvers are
having some difficulties on some mesh levels while being OK on others; my
hunch is this might be related to projections of hanging boundary nodes I
introduce while refining - but this leaves me stuck on the above assert.

Could someone please provide some kind of introductory reference on how
this constraint process works as a whole, or on what might be causing me
to trip this assert? I notice that this assert is disabled in the 3
argument version, and I guess I am just wondering what is the best way to
understand why the DofConstraintRow might be empty in this situation.

Thanks for your time,
Boris Boutkov
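[For reference, the four-argument overload in question is
DofMap::constrain_element_matrix(matrix, row_dofs, col_dofs,
asymmetric_constraint_rows). A sketch of a call site follows; the matrix
and dof-vector names are placeholders, not Boris's code.]

#include <vector>

#include "libmesh/dense_matrix.h"
#include "libmesh/dof_map.h"

using namespace libMesh;

// Constrain one element's interpolation block with the rectangular
// (four-argument) overload.  Pe, fine_dofs, and coarse_dofs stand in for
// whatever the multigrid assembly loop produces per element.
void constrain_interp_block (const DofMap & dof_map,
                             DenseMatrix<Number> & Pe,        // fine x coarse
                             std::vector<dof_id_type> & fine_dofs,
                             std::vector<dof_id_type> & coarse_dofs)
{
  // Rows and columns are condensed using the constraints known to
  // dof_map; the last argument is the asymmetric_constraint_rows flag in
  // question (it defaults to true, which reinforces constraint-row
  // equations at the cost of symmetry).
  dof_map.constrain_element_matrix(Pe, fine_dofs, coarse_dofs, true);
}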
From: John P. <jwp...@gm...> - 2016-08-24 18:54:59

On Wed, Aug 24, 2016 at 12:48 PM, Cody Permann <cod...@gm...> wrote:

> Well we'd need to redirect stderr too or we'd still have interleaved
> messages on that stream. I'm not sure I care to really redirect. I think
> I'd rather just drop them and print a message about running without that
> flag if the exit code is non-zero.

OK, sorry I was confused, but yes, I think it would be good to add an
option that *specifically* drops std::cerr on non-0 ranks (what we
currently do by default for std::cout).

--
John
From: Cody P. <cod...@gm...> - 2016-08-24 18:48:38

Well we'd need to redirect stderr too or we'd still have interleaved
messages on that stream. I'm not sure I care to really redirect. I think
I'd rather just drop them and print a message about running without that
flag if the exit code is non-zero.

On Wed, Aug 24, 2016 at 12:08 PM John Peterson <jwp...@gm...> wrote:

> On Wed, Aug 24, 2016 at 9:02 AM, Cody Permann <cod...@gm...> wrote:
>
>> I'd like to propose a new stream redirect/suppress option, or perhaps
>> just expand the existing option in libMesh.
>>
>> Motivation:
>> Our regression testing system has long struggled with the proper way to
>> handle interleaved output due to multiple processors triggering expected
>> errors in application code and outputting to the shared console. We've
>> talked about using stdout vs stderr several times, but the truth is we
>> have the need to use both output streams for various uses (non-fatal
>> warnings versus fatal errors, etc.), and all of these things also need
>> to be tested.
>>
>> Neither one of the existing options (--redirect-stdout, --keep-cout) is
>> ideal for creating a robust testing system. The redirect option is
>> close to what I'd like to use, but the problem is I don't like the fact
>> that using that option makes the application completely silent on the
>> terminal. Using this option in the testing system would be problematic
>> for some of our less savvy users who copy 'n' paste the run commands
>> when replicating errors on their system. Also, it would create a lot of
>> extra output file management problems with running several tests at
>> once in parallel, potentially in the same directory (a common MOOSE
>> occurrence).
>>
>> I'd like a new option that allows us to suppress the output from all
>> ranks except the master on both output streams. This would make the
>> output of error messages encountered on all ranks always clean, so that
>> we wouldn't have spurious failures on new tests looking for error or
>> warning messages in the console output. I would also propose that we
>> add additional logic to the main libMesh destructor to print a message
>> about streams being suppressed when the library terminates with an
>> error code, in case an error is triggered on a non-master rank and the
>> error message isn't seen on the terminal.
>>
>> TL;DR:
>> I'd like a new stream suppression option in libMesh, perhaps
>> "--suppress-parallel-output"?
>
> Seems reasonable, but I thought what you were originally after was a
> --redirect-all-but-processor0-stdout flag??
>
> --
> John
From: John P. <jwp...@gm...> - 2016-08-24 18:08:09

On Wed, Aug 24, 2016 at 9:02 AM, Cody Permann <cod...@gm...> wrote:

> I'd like to propose a new stream redirect/suppress option, or perhaps
> just expand the existing option in libMesh.
>
> Motivation:
> Our regression testing system has long struggled with the proper way to
> handle interleaved output due to multiple processors triggering expected
> errors in application code and outputting to the shared console. We've
> talked about using stdout vs stderr several times, but the truth is we
> have the need to use both output streams for various uses (non-fatal
> warnings versus fatal errors, etc.), and all of these things also need
> to be tested.
>
> Neither one of the existing options (--redirect-stdout, --keep-cout) is
> ideal for creating a robust testing system. The redirect option is close
> to what I'd like to use, but the problem is I don't like the fact that
> using that option makes the application completely silent on the
> terminal. Using this option in the testing system would be problematic
> for some of our less savvy users who copy 'n' paste the run commands
> when replicating errors on their system. Also, it would create a lot of
> extra output file management problems with running several tests at once
> in parallel, potentially in the same directory (a common MOOSE
> occurrence).
>
> I'd like a new option that allows us to suppress the output from all
> ranks except the master on both output streams. This would make the
> output of error messages encountered on all ranks always clean, so that
> we wouldn't have spurious failures on new tests looking for error or
> warning messages in the console output. I would also propose that we add
> additional logic to the main libMesh destructor to print a message about
> streams being suppressed when the library terminates with an error code,
> in case an error is triggered on a non-master rank and the error message
> isn't seen on the terminal.
>
> TL;DR:
> I'd like a new stream suppression option in libMesh, perhaps
> "--suppress-parallel-output"?

Seems reasonable, but I thought what you were originally after was a
--redirect-all-but-processor0-stdout flag??

--
John
From: Cody P. <cod...@gm...> - 2016-08-24 15:02:56

I'd like to propose a new stream redirect/suppress option, or perhaps just
expand the existing option in libMesh.

Motivation:
Our regression testing system has long struggled with the proper way to
handle interleaved output due to multiple processors triggering expected
errors in application code and outputting to the shared console. We've
talked about using stdout vs stderr several times, but the truth is we
have the need to use both output streams for various uses (non-fatal
warnings versus fatal errors, etc.), and all of these things also need to
be tested.

Neither one of the existing options (--redirect-stdout, --keep-cout) is
ideal for creating a robust testing system. The redirect option is close
to what I'd like to use, but the problem is I don't like the fact that
using that option makes the application completely silent on the terminal.
Using this option in the testing system would be problematic for some of
our less savvy users who copy 'n' paste the run commands when replicating
errors on their system. Also, it would create a lot of extra output file
management problems with running several tests at once in parallel,
potentially in the same directory (a common MOOSE occurrence).

I'd like a new option that allows us to suppress the output from all ranks
except the master on both output streams. This would make the output of
error messages encountered on all ranks always clean, so that we wouldn't
have spurious failures on new tests looking for error or warning messages
in the console output. I would also propose that we add additional logic
to the main libMesh destructor to print a message about streams being
suppressed when the library terminates with an error code, in case an
error is triggered on a non-master rank and the error message isn't seen
on the terminal.

TL;DR:
I'd like a new stream suppression option in libMesh, perhaps
"--suppress-parallel-output"?
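[A minimal sketch of what such a flag might do under the hood. This is
plain C++/MPI, not actual libMesh code; the null-device redirection is one
assumed implementation strategy, not the library's.]

#include <fstream>
#include <iostream>

#include <mpi.h>

// Silence std::cout and std::cerr on every rank but 0 by swapping in a
// stream buffer that writes to the null device.
void suppress_parallel_output ()
{
  int rank = 0;
  MPI_Comm_rank(MPI_COMM_WORLD, &rank);

  if (rank != 0)
    {
      static std::ofstream dev_null("/dev/null");
      std::cout.rdbuf(dev_null.rdbuf());
      std::cerr.rdbuf(dev_null.rdbuf());
    }
}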
From: Roy S. <roy...@ic...> - 2016-08-16 22:25:30

On Mon, 15 Aug 2016, John Peterson wrote:

>> So is there any reason we shouldn't just bite the bullet, give the
>> DofMap a MeshBase& at construction time, and deprecate that argument in
>> the methods which use it?
>
> Sounds reasonable to me. The only reason off the top of my head would be
> if it was a common pattern for one to build/manage DofMaps manually, but
> this definitely is not the case AFAIK.

I hit a reason in https://github.com/idaholab/moose/issues/7562 - but I
don't think this is a showstopper. It requires considerable effort to
destroy your Mesh before you destroy the DofMap, and if anyone else
manages to pull that off too, they'll probably have a similarly short fix.
---
Roy
From: Roy S. <roy...@ic...> - 2016-08-15 17:15:09

On Mon, 15 Aug 2016, Kirk, Benjamin (JSC-EG311) wrote:

> Might be a slight complication for unit testing but that's no reason not
> to do the right thing...
>
> On Aug 15, 2016, at 12:01 PM, John Peterson <jwp...@gm...> wrote:
>
>> Sounds reasonable to me. The only reason off the top of my head would
>> be if it was a common pattern for one to build/manage DofMaps manually,
>> but this definitely is not the case AFAIK.

Okay, thanks; I'll make the change along with my next PR.

I just now noticed the last straw: DofMap::_sys_number. So not only would
a single DofMap only be useful on two meshes if they were identical, it
would only be shareable between two systems with the same number.

One of these days I imagine tearing through libMesh and changing APIs to
turn as many internal two-way dependencies into one-way dependencies as
possible, but that day is not today.
---
Roy
From: Kirk, B. (JSC-EG311) <ben...@na...> - 2016-08-15 17:06:54

Might be a slight complication for unit testing but that's no reason not
to do the right thing...

On Aug 15, 2016, at 12:01 PM, John Peterson <jwp...@gm...> wrote:

> On Mon, Aug 15, 2016 at 10:57 AM, Roy Stogner <roy...@ic...> wrote:
>
>> We take MeshBase& as a necessary argument for half a dozen functions,
>> we're going to need a MeshBase for the new ghosting functor APIs too...
>>
>> Plus, we're not actually capable of handling multiple mesh objects from
>> the same DofMap except in the case where those meshes are basically
>> identical: we can attach SparseMatrix objects and sparsity + send_list
>> augmentation objects that are ridiculously mesh-dependent.
>>
>> So is there any reason we shouldn't just bite the bullet, give the
>> DofMap a MeshBase& at construction time, and deprecate that argument in
>> the methods which use it?
>
> Sounds reasonable to me. The only reason off the top of my head would be
> if it was a common pattern for one to build/manage DofMaps manually, but
> this definitely is not the case AFAIK.
>
> --
> John
From: John P. <jwp...@gm...> - 2016-08-15 17:01:16

On Mon, Aug 15, 2016 at 10:57 AM, Roy Stogner <roy...@ic...> wrote:

> We take MeshBase& as a necessary argument for half a dozen functions,
> we're going to need a MeshBase for the new ghosting functor APIs too...
>
> Plus, we're not actually capable of handling multiple mesh objects from
> the same DofMap except in the case where those meshes are basically
> identical: we can attach SparseMatrix objects and sparsity + send_list
> augmentation objects that are ridiculously mesh-dependent.
>
> So is there any reason we shouldn't just bite the bullet, give the
> DofMap a MeshBase& at construction time, and deprecate that argument in
> the methods which use it?

Sounds reasonable to me. The only reason off the top of my head would be
if it was a common pattern for one to build/manage DofMaps manually, but
this definitely is not the case AFAIK.

--
John
From: Roy S. <roy...@ic...> - 2016-08-15 16:58:05

We take MeshBase& as a necessary argument for half a dozen functions,
we're going to need a MeshBase for the new ghosting functor APIs too...

Plus, we're not actually capable of handling multiple mesh objects from
the same DofMap except in the case where those meshes are basically
identical: we can attach SparseMatrix objects and sparsity + send_list
augmentation objects that are ridiculously mesh-dependent.

So is there any reason we shouldn't just bite the bullet, give the DofMap
a MeshBase& at construction time, and deprecate that argument in the
methods which use it?
---
Roy
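[A toy illustration of the proposed pattern, binding the mesh at
construction and deprecating the per-call argument. This is not libMesh
code; the class and method shapes are simplified stand-ins.]

#include <cassert>

struct MeshBase { int n_nodes = 0; };

class DofMap
{
public:
  DofMap (unsigned int sys_number, MeshBase & mesh) :
    _sys_number(sys_number), _mesh(mesh) {}

  // New style: the stored reference supplies the mesh.
  void distribute_dofs () { /* uses _mesh */ }

  // Deprecated style: the argument must match the stored reference.
  void distribute_dofs (MeshBase & mesh)
  {
    assert(&mesh == &_mesh);  // catches use with a different mesh
    this->distribute_dofs();
  }

private:
  unsigned int _sys_number;
  MeshBase & _mesh;
};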
From: Roy S. <roy...@ic...> - 2016-08-09 23:17:43

On Tue, 9 Aug 2016, Derek Gaston wrote:

> Ok - How about I just add a function for now to return the group number
> given a variable number? That would at least let me get the
> information...

That would be my preference, yeah.
---
Roy
From: Derek G. <fri...@gm...> - 2016-08-09 23:08:46

Ok - How about I just add a function for now to return the group number
given a variable number? That would at least let me get the information...

Currently var_to_vg() is only on DofObjects and is private...

Derek

On Tue, Aug 9, 2016 at 7:05 PM Roy Stogner <roy...@ic...> wrote:

> On Sun, 7 Aug 2016, Derek Gaston wrote:
>
>> Well - one thing that makes this worse is that System::add_variables()
>> (note the "s") returns the variable ID of the last variable added when
>> you're adding a whole group (which is crazy not useful).
>
> You could backtrack to get the other variables' ids; it's useful... but
> really weird.
>
>> I know that we delay group creation a bit so that groups can
>> automatically be created, so we may not know the group number at this
>> point (I can't quite tell)... but what if that function is changed to
>> return the VariableGroup object itself? Holding it as a reference, we
>> should be able to get the ID out of it later...
>
> With the *current* behavior, even with the identify_variable_groups
> behavior, we ought to be able to get the current group number, because
> our only optimization is to create groups from contiguously added
> variables.
>
> I'd like to add an option to sort variables under the hood before
> grouping, for use in multiphysics codes which don't do that sorting
> themselves, but if we ever add that it won't be compatible with the
> current return-a-number-immediately-from-add_variable behavior that
> everyone relies on, so I won't worry about adding new incompatible APIs
> either.
>
> Regardless, I'd rather add a new API to get group data rather than
> changing the current APIs.
> ---
> Roy
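[For concreteness, the accessor Derek proposes might look something like
the sketch below. This is hypothetical: variable_group_number() is not an
existing libMesh API, and here it is written as a free helper rather than
the System member an actual patch would add.]

#include "libmesh/system.h"
#include "libmesh/variable.h"

using namespace libMesh;

// Hypothetical helper: return the number of the VariableGroup containing
// variable var_num, by walking the groups in order.  This works because
// groups are formed from contiguously added variables.
unsigned int variable_group_number (const System & sys, unsigned int var_num)
{
  for (unsigned int vg = 0; vg != sys.n_variable_groups(); ++vg)
    {
      const VariableGroup & group = sys.variable_group(vg);

      for (unsigned int v = 0; v != group.n_variables(); ++v)
        if (group.number(v) == var_num)
          return vg;
    }

  libmesh_error_msg("Variable number " << var_num << " not found!");
}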
From: Roy S. <roy...@ic...> - 2016-08-09 23:05:20

On Sun, 7 Aug 2016, Derek Gaston wrote:

> Well - one thing that makes this worse is that System::add_variables()
> (note the "s") returns the variable ID of the last variable added when
> you're adding a whole group (which is crazy not useful).

You could backtrack to get the other variables' ids; it's useful... but
really weird.

> I know that we delay group creation a bit so that groups can
> automatically be created, so we may not know the group number at this
> point (I can't quite tell)... but what if that function is changed to
> return the VariableGroup object itself? Holding it as a reference, we
> should be able to get the ID out of it later...

With the *current* behavior, even with the identify_variable_groups
behavior, we ought to be able to get the current group number, because our
only optimization is to create groups from contiguously added variables.

I'd like to add an option to sort variables under the hood before
grouping, for use in multiphysics codes which don't do that sorting
themselves, but if we ever add that it won't be compatible with the
current return-a-number-immediately-from-add_variable behavior that
everyone relies on, so I won't worry about adding new incompatible APIs
either.

Regardless, I'd rather add a new API to get group data rather than
changing the current APIs.
---
Roy