From: Roy Stogner <roystgnr@ic...> - 2011-09-29 17:04:08

On Wed, 28 Sep 2011, Kirk, Benjamin (JSC-EG311) wrote:

> So am I right that the most pressing library change required is to
> update the documentation?

Sounds like.  Personally *I* think the API is self-documenting: if the
argument is non-const then of course we're going to change it!  ;)

I guess the Doxygen comments really ought to specify why and to what
it's being changed, though.
--
Roy
From: David Andrs <David.Andrs@in...> - 2011-09-29 14:49:59

Roy Stogner <roystgnr@...> wrote on 09/28/2011 05:02:21 PM:

> On Wed, 28 Sep 2011, David Andrs wrote:
>
>> We ran into an issue with our framework here at INL when forming the
>> full Jacobian matrix as a preconditioner with AMR.  What we hit in our
>> problem was a mismatch between the size of the dense matrix (local
>> Jacobian contributions) and the dof indices array.  I traced the
>> problem down to DofMap::constrain_element_matrix.  I found out that
>> this method can change the dof indices array (the method that actually
>> changes it is DofMap::build_constraint_matrix), and that triggered an
>> assert in libMesh (attached is a modified ex19 that exhibits this
>> behavior).  This only happens when there are constraints involved.  I
>> guess no one has bumped into this since people usually call this
>> constrain method just once, but we call it multiple times, trying to
>> reuse those arrays.
>
> I've run into this before; my workaround was to copy the array before
> calling the constrain method.
>
>> So my question: Is this known behavior?  And is this used somewhere
>> else in libMesh, i.e. do we call constrain_element_matrix and then use
>> the modified array for something later on?
>
> It's known behavior (in fact we've now got DofMap::constrain_nothing()
> whose sole purpose is this behavior), and IIRC I've talked to users
> with app code depending on this behavior, but I don't think it was
> originally *desired* behavior for its own sake; it's just that
> build_constraint_matrix() needs to expand that vector out recursively,
> so that method itself "reuses" the modified array.
>
> Whether that recursive use counts as "somewhere else" is a matter of
> perspective, but I think Ben made a fine design choice here: if
> build_constraint_matrix did avoid modifying its input argument then it
> would have to keep both the old and new vector, and whenever the
> constraint recursion went N levels deep we'd have N different vectors
> floating around.
>
> Whereas even for our user codes where the original unexpanded vector
> does get needed again afterwards, we can get away with only one extra
> vector copy and one extra line of code to make the copy.
>
> On the other hand, I'm sure nobody's benchmarked both alternatives (or
> the alternative where build_constraint_matrix uses std::set, or the
> alternative where a non-recursive "user-level" build_constraint_matrix
> method makes a vector copy and calls a recursive "library-level"
> method...), and I doubt it makes a noticeable performance difference.
>
> The gripping hand is that at this point I wouldn't want to change the
> old method in any case (see: "app code may depend on this behavior").
> I don't think it's worth adding a new method either, unless there's
> some flaw I'm overlooking in my workaround?
> --
> Roy

Thanks, Roy, for the explanation.  I wanted to ask before taking any
action on a fix.  The workaround you suggested above was one of the
solutions we came up with.

--
David
From: Kirk, Benjamin (JSC-EG311) <benjamin.kirk-1@na...> - 2011-09-28 23:51:44

(Sorry to top-reply, damn phone...)

I also was originally planning for the most common case -- in fact there
are no constraints for the current element, hence no copy is required...

So am I right that the most pressing library change required is to
update the documentation?

-Ben

On Sep 28, 2011, at 6:02 PM, "Roy Stogner" <roystgnr@...> wrote:

> On Wed, 28 Sep 2011, David Andrs wrote:
>
>> We ran into an issue with our framework here at INL when forming the
>> full Jacobian matrix as a preconditioner with AMR.  What we hit in our
>> problem was a mismatch between the size of the dense matrix (local
>> Jacobian contributions) and the dof indices array.  I traced the
>> problem down to DofMap::constrain_element_matrix.  I found out that
>> this method can change the dof indices array (the method that actually
>> changes it is DofMap::build_constraint_matrix), and that triggered an
>> assert in libMesh (attached is a modified ex19 that exhibits this
>> behavior).  This only happens when there are constraints involved.  I
>> guess no one has bumped into this since people usually call this
>> constrain method just once, but we call it multiple times, trying to
>> reuse those arrays.
>
> I've run into this before; my workaround was to copy the array before
> calling the constrain method.
>
>> So my question: Is this known behavior?  And is this used somewhere
>> else in libMesh, i.e. do we call constrain_element_matrix and then use
>> the modified array for something later on?
>
> It's known behavior (in fact we've now got DofMap::constrain_nothing()
> whose sole purpose is this behavior), and IIRC I've talked to users
> with app code depending on this behavior, but I don't think it was
> originally *desired* behavior for its own sake; it's just that
> build_constraint_matrix() needs to expand that vector out recursively,
> so that method itself "reuses" the modified array.
>
> Whether that recursive use counts as "somewhere else" is a matter of
> perspective, but I think Ben made a fine design choice here: if
> build_constraint_matrix did avoid modifying its input argument then it
> would have to keep both the old and new vector, and whenever the
> constraint recursion went N levels deep we'd have N different vectors
> floating around.
>
> Whereas even for our user codes where the original unexpanded vector
> does get needed again afterwards, we can get away with only one extra
> vector copy and one extra line of code to make the copy.
>
> On the other hand, I'm sure nobody's benchmarked both alternatives (or
> the alternative where build_constraint_matrix uses std::set, or the
> alternative where a non-recursive "user-level" build_constraint_matrix
> method makes a vector copy and calls a recursive "library-level"
> method...), and I doubt it makes a noticeable performance difference.
>
> The gripping hand is that at this point I wouldn't want to change the
> old method in any case (see: "app code may depend on this behavior").
> I don't think it's worth adding a new method either, unless there's
> some flaw I'm overlooking in my workaround?
> --
> Roy
>
> _______________________________________________
> Libmesh-devel mailing list
> Libmesh-devel@...
> https://lists.sourceforge.net/lists/listinfo/libmesh-devel
From: Roy Stogner <roystgnr@ic...> - 2011-09-28 23:02:29

On Wed, 28 Sep 2011, David Andrs wrote:

> We ran into an issue with our framework here at INL when forming the
> full Jacobian matrix as a preconditioner with AMR.  What we hit in our
> problem was a mismatch between the size of the dense matrix (local
> Jacobian contributions) and the dof indices array.  I traced the
> problem down to DofMap::constrain_element_matrix.  I found out that
> this method can change the dof indices array (the method that actually
> changes it is DofMap::build_constraint_matrix), and that triggered an
> assert in libMesh (attached is a modified ex19 that exhibits this
> behavior).  This only happens when there are constraints involved.  I
> guess no one has bumped into this since people usually call this
> constrain method just once, but we call it multiple times, trying to
> reuse those arrays.

I've run into this before; my workaround was to copy the array before
calling the constrain method.

> So my question: Is this known behavior?  And is this used somewhere
> else in libMesh, i.e. do we call constrain_element_matrix and then use
> the modified array for something later on?

It's known behavior (in fact we've now got DofMap::constrain_nothing()
whose sole purpose is this behavior), and IIRC I've talked to users
with app code depending on this behavior, but I don't think it was
originally *desired* behavior for its own sake; it's just that
build_constraint_matrix() needs to expand that vector out recursively,
so that method itself "reuses" the modified array.

Whether that recursive use counts as "somewhere else" is a matter of
perspective, but I think Ben made a fine design choice here: if
build_constraint_matrix did avoid modifying its input argument then it
would have to keep both the old and new vector, and whenever the
constraint recursion went N levels deep we'd have N different vectors
floating around.

Whereas even for our user codes where the original unexpanded vector
does get needed again afterwards, we can get away with only one extra
vector copy and one extra line of code to make the copy.

On the other hand, I'm sure nobody's benchmarked both alternatives (or
the alternative where build_constraint_matrix uses std::set, or the
alternative where a non-recursive "user-level" build_constraint_matrix
method makes a vector copy and calls a recursive "library-level"
method...), and I doubt it makes a noticeable performance difference.

The gripping hand is that at this point I wouldn't want to change the
old method in any case (see: "app code may depend on this behavior").
I don't think it's worth adding a new method either, unless there's
some flaw I'm overlooking in my workaround?
--
Roy
From: David Andrs <David.Andrs@in...> - 2011-09-28 22:26:13

/* $Id: ex4.C 2501 2007-11-20 02:33:29Z benkirk $ */

/* The Next Great Finite Element Library. */
/* Copyright (C) 2003  Benjamin S. Kirk */

/* This library is free software; you can redistribute it and/or */
/* modify it under the terms of the GNU Lesser General Public */
/* License as published by the Free Software Foundation; either */
/* version 2.1 of the License, or (at your option) any later version. */

/* This library is distributed in the hope that it will be useful, */
/* but WITHOUT ANY WARRANTY; without even the implied warranty of */
/* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the GNU */
/* Lesser General Public License for more details. */

/* You should have received a copy of the GNU Lesser General Public */
/* License along with this library; if not, write to the Free Software */
/* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA  02111-1307  USA */

// <h1>Example 19 - Solving the 2D Young Laplace Problem using nonlinear solvers</h1>
//
// This example shows how to use the NonlinearImplicitSystem class
// to efficiently solve nonlinear problems in parallel.
//
// In nonlinear systems, we aim at finding x that satisfies R(x) = 0.
// In nonlinear finite element analysis, the residual is typically
// of the form R(x) = K(x)*x - f, with K(x) the system matrix and f
// the "right-hand-side".  The NonlinearImplicitSystem class expects
// two callback functions to compute the residual R and its Jacobian
// for the Newton iterations.  Here, we just approximate
// the true Jacobian by K(x).
//
// You can turn on preconditioning of the matrix-free system using the
// jacobian by passing "-pre" on the command line.  Currently this only
// works with PETSc, so it isn't used by "make run".
//
// This example also runs with the experimental Trilinos NOX solvers by
// specifying the -use-trilinos command line argument.

// C++ include files that we need
#include <iostream>
#include <algorithm>
#include <cmath>

// Various include files needed for the mesh & solver functionality.
#include "libmesh.h"
#include "mesh.h"
#include "mesh_refinement.h"
#include "exodusII_io.h"
#include "equation_systems.h"
#include "fe.h"
#include "quadrature_gauss.h"
#include "dof_map.h"
#include "sparse_matrix.h"
#include "numeric_vector.h"
#include "dense_matrix.h"
#include "dense_vector.h"
#include "elem.h"
#include "string_to_enum.h"
#include "getpot.h"
#include "dense_submatrix.h"
#include "dense_subvector.h"
#include "mesh_generation.h"
#include "coupling_matrix.h"
#include "boundary_info.h"
#include "error_vector.h"
#include "kelly_error_estimator.h"

// The nonlinear solver and system we will be using
#include "nonlinear_solver.h"
#include "nonlinear_implicit_system.h"

// Necessary for programmatically setting petsc options
#ifdef LIBMESH_HAVE_PETSC
#include <petsc.h>
#endif

// Bring in everything from the libMesh namespace
using namespace libMesh;

// A reference to our equation system
EquationSystems *_equation_system = NULL;

// Let's define the physical parameters of the equation
const Real kappa = 1.;
const Real sigma = 0.2;


// This function computes the Jacobian K(x)
void compute_jacobian (const NumericVector<Number>& soln,
                       SparseMatrix<Number>& jacobian,
                       NonlinearImplicitSystem& sys)
{
  // Get a reference to the equation system.
  EquationSystems &es = *_equation_system;

  // Get a constant reference to the mesh object.
  const MeshBase& mesh = es.get_mesh();

  // The dimension that we are running
  const unsigned int dim = mesh.mesh_dimension();

  // Get a reference to the NonlinearImplicitSystem we are solving
  NonlinearImplicitSystem& system =
    es.get_system<NonlinearImplicitSystem>("Laplace-Young");

  const unsigned int u_var = system.variable_number ("u");
  const unsigned int v_var = system.variable_number ("v");

  // A reference to the \p DofMap object for this system.  The \p DofMap
  // object handles the index translation from node and element numbers
  // to degree of freedom numbers.  We will talk more about the \p DofMap
  // in future examples.
  const DofMap& dof_map = system.get_dof_map();

  // Get a constant reference to the Finite Element type
  // for the first (and only) variable in the system.
  FEType fe_type = dof_map.variable_type(0);

  // Build a Finite Element object of the specified type.  Since the
  // \p FEBase::build() member dynamically creates memory we will
  // store the object as an \p AutoPtr<FEBase>.  This can be thought
  // of as a pointer that will clean up after itself.
  AutoPtr<FEBase> fe (FEBase::build(dim, fe_type));

  // A 5th order Gauss quadrature rule for numerical integration.
  QGauss qrule (dim, FIFTH);

  // Tell the finite element object to use our quadrature rule.
  fe->attach_quadrature_rule (&qrule);

  // Here we define some references to cell-specific data that
  // will be used to assemble the linear system.
  // We begin with the element Jacobian * quadrature weight at each
  // integration point.
  const std::vector<Real>& JxW = fe->get_JxW();

  // The element shape functions evaluated at the quadrature points.
  const std::vector<std::vector<Real> >& phi = fe->get_phi();

  // The element shape function gradients evaluated at the quadrature
  // points.
  const std::vector<std::vector<RealGradient> >& dphi = fe->get_dphi();

  // Define data structures to contain the Jacobian element matrix.
  // Following basic finite element terminology we will denote these
  // "Ke".  More detail is in example 3.
  // DenseMatrix<Number> Ke;
  // DenseMatrix<Number> Kuu(Ke), Kvv(Ke), Kuv(Ke);
  DenseMatrix<Number> Kuu, Kvv, Kuv;

  // This vector will hold the degree of freedom indices for
  // the element.  These define where in the global system
  // the element degrees of freedom get mapped.
  std::vector<unsigned int> dof_indices;
  std::vector<unsigned int> dof_indices_u;
  std::vector<unsigned int> dof_indices_v;

  // Now we will loop over all the active elements in the mesh which
  // are local to this processor.
  // We will compute the element Jacobian contribution.
  MeshBase::const_element_iterator el = mesh.active_local_elements_begin();
  const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end();

  for ( ; el != end_el; ++el)
    {
      // Store a pointer to the element we are currently
      // working on.  This allows for nicer syntax later.
      const Elem* elem = *el;

      // Get the degree of freedom indices for the
      // current element.  These define where in the global
      // matrix and right-hand-side this element will
      // contribute to.
      dof_map.dof_indices (elem, dof_indices);
      dof_map.dof_indices (elem, dof_indices_u, u_var);
      dof_map.dof_indices (elem, dof_indices_v, v_var);

      const unsigned int n_dofs   = dof_indices.size();
      const unsigned int n_u_dofs = dof_indices_u.size();
      const unsigned int n_v_dofs = dof_indices_v.size();

      // Compute the element-specific data for the current
      // element.  This involves computing the location of the
      // quadrature points (q_point) and the shape functions
      // (phi, dphi) for the current element.
      fe->reinit (elem);

      // Zero the element Jacobian before summing them.  We use the
      // resize member here because the number of degrees of freedom
      // might have changed from the last element.  Note that this will
      // be the case if the element type is different (i.e. the last
      // element was a triangle, now we are on a quadrilateral).
      // Ke.resize (dof_indices.size(), dof_indices.size());
      // Kuu.reposition( 0, 0, n_u_dofs, n_u_dofs);
      // Kuv.reposition(n_u_dofs, 0, n_u_dofs, n_v_dofs);
      // Kvv.reposition(n_u_dofs, n_u_dofs, n_v_dofs, n_v_dofs);
      Kuu.resize(n_u_dofs, n_u_dofs);
      Kuv.resize(n_u_dofs, n_v_dofs);
      Kvv.resize(n_v_dofs, n_v_dofs);

      // Now we will build the element Jacobian.  This involves
      // a double loop to integrate the test functions (i) against
      // the trial functions (j).  Note that the Jacobian depends
      // on the current solution x, which we access using the soln
      // vector.
      for (unsigned int qp=0; qp<qrule.n_points(); qp++)
        {
          Gradient grad_u;
          Gradient grad_v;

          for (unsigned int i=0; i<phi.size(); i++)
            {
              grad_u += dphi[i][qp]*soln(dof_indices_u[i]);
              grad_v += dphi[i][qp]*soln(dof_indices_v[i]);
            }

          for (unsigned int i=0; i<phi.size(); i++)
            for (unsigned int j=0; j<phi.size(); j++)
              {
                Kuu(i,j) += JxW[qp]*(dphi[i][qp]*dphi[j][qp]);
                Kvv(i,j) += JxW[qp]*(dphi[i][qp]*dphi[j][qp]);
              }
        }

      // dof_map.constrain_element_matrix (Ke, dof_indices);
      dof_map.constrain_element_matrix (Kuu, dof_indices_u, dof_indices_u);
      dof_map.constrain_element_matrix (Kuv, dof_indices_u, dof_indices_v);
      dof_map.constrain_element_matrix (Kvv, dof_indices_v, dof_indices_v);

      jacobian.add_matrix (Kuu, dof_indices_u, dof_indices_u);
      jacobian.add_matrix (Kuv, dof_indices_u, dof_indices_v);
      jacobian.add_matrix (Kvv, dof_indices_v, dof_indices_v);

      // Add the element matrix to the system Jacobian.
      // jacobian.add_matrix (Ke, dof_indices);
    }

  jacobian.close();

  std::vector<int> rows;
  std::vector<unsigned int> nl;
  std::vector<short int> il;
  mesh.boundary_info->build_node_list(nl, il);
  for (unsigned int ii = 0; ii < nl.size(); ii++)
    {
      const Node & nd = mesh.node(nl[ii]);
      if (il[ii] == 1 || il[ii] == 3)
        {
          unsigned int idx = nd.dof_number(system.number(), u_var, 0);
          rows.push_back(idx);
        }
    }
  jacobian.zero_rows(rows, 1);
  jacobian.close();

  // That's it.
}


// Here we compute the residual R(x) = K(x)*x - f.  The current solution
// x is passed in the soln vector.
void compute_residual (const NumericVector<Number>& soln,
                       NumericVector<Number>& residual,
                       NonlinearImplicitSystem& sys)
{
  EquationSystems &es = *_equation_system;

  // Get a constant reference to the mesh object.
  const MeshBase& mesh = es.get_mesh();

  // The dimension that we are running
  const unsigned int dim = mesh.mesh_dimension();
  libmesh_assert (dim == 2);

  // Get a reference to the NonlinearImplicitSystem we are solving
  NonlinearImplicitSystem& system =
    es.get_system<NonlinearImplicitSystem>("Laplace-Young");

  const unsigned int u_var = system.variable_number ("u");
  const unsigned int v_var = system.variable_number ("v");

  // A reference to the \p DofMap object for this system.  The \p DofMap
  // object handles the index translation from node and element numbers
  // to degree of freedom numbers.  We will talk more about the \p DofMap
  // in future examples.
  const DofMap& dof_map = system.get_dof_map();

  // Get a constant reference to the Finite Element type
  // for the first (and only) variable in the system.
  FEType fe_type = dof_map.variable_type(0);

  // Build a Finite Element object of the specified type.  Since the
  // \p FEBase::build() member dynamically creates memory we will
  // store the object as an \p AutoPtr<FEBase>.  This can be thought
  // of as a pointer that will clean up after itself.
  AutoPtr<FEBase> fe (FEBase::build(dim, fe_type));

  // A 5th order Gauss quadrature rule for numerical integration.
  QGauss qrule (dim, FIFTH);

  // Tell the finite element object to use our quadrature rule.
  fe->attach_quadrature_rule (&qrule);

  // Declare a special finite element object for
  // boundary integration.
  AutoPtr<FEBase> fe_face (FEBase::build(dim, fe_type));

  // Boundary integration requires one quadrature rule,
  // with dimensionality one less than the dimensionality
  // of the element.
  QGauss qface(dim-1, FIFTH);

  // Tell the finite element object to use our
  // quadrature rule.
  fe_face->attach_quadrature_rule (&qface);

  // Here we define some references to cell-specific data that
  // will be used to assemble the linear system.
  // We begin with the element Jacobian * quadrature weight at each
  // integration point.
  const std::vector<Real>& JxW = fe->get_JxW();

  // The element shape functions evaluated at the quadrature points.
  const std::vector<std::vector<Real> >& phi = fe->get_phi();

  // The element shape function gradients evaluated at the quadrature
  // points.
  const std::vector<std::vector<RealGradient> >& dphi = fe->get_dphi();

  // Define data structures to contain the residual contributions
  DenseVector<Number> Re;
  DenseSubVector<Number> Re_u(Re), Re_v(Re);

  // This vector will hold the degree of freedom indices for
  // the element.  These define where in the global system
  // the element degrees of freedom get mapped.
  std::vector<unsigned int> dof_indices;
  std::vector<unsigned int> dof_indices_u, dof_indices_v;

  // Now we will loop over all the active elements in the mesh which
  // are local to this processor.
  // We will compute the element residual.
  residual.zero();

  MeshBase::const_element_iterator el = mesh.active_local_elements_begin();
  const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end();

  for ( ; el != end_el; ++el)
    {
      // Store a pointer to the element we are currently
      // working on.  This allows for nicer syntax later.
      const Elem* elem = *el;

      // Get the degree of freedom indices for the
      // current element.  These define where in the global
      // matrix and right-hand-side this element will
      // contribute to.
      dof_map.dof_indices (elem, dof_indices);
      dof_map.dof_indices (elem, dof_indices_u, u_var);
      dof_map.dof_indices (elem, dof_indices_v, v_var);

      const unsigned int n_dofs   = dof_indices.size();
      const unsigned int n_u_dofs = dof_indices_u.size();
      const unsigned int n_v_dofs = dof_indices_v.size();

      // Compute the element-specific data for the current
      // element.  This involves computing the location of the
      // quadrature points (q_point) and the shape functions
      // (phi, dphi) for the current element.
      fe->reinit (elem);

      // We use the resize member here because the number of degrees of
      // freedom might have changed from the last element.  Note that
      // this will be the case if the element type is different (i.e.
      // the last element was a triangle, now we are on a quadrilateral).
      Re.resize (dof_indices.size());
      Re_u.reposition(u_var * n_u_dofs, n_u_dofs);
      Re_v.reposition(v_var * n_u_dofs, n_v_dofs);

      // Now we will build the residual.  This involves
      // the construction of the matrix K and multiplication of it
      // with the current solution x.  We rearrange this into two loops:
      // In the first, we calculate only the contribution of
      // K_ij*x_j which is independent of the row i.  In the second loop,
      // we multiply with the row-dependent part and add it to the element
      // residual.
      for (unsigned int qp=0; qp<qrule.n_points(); qp++)
        {
          Number u = 0;
          Gradient grad_u;
          Number v = 0;
          Gradient grad_v;

          for (unsigned int j=0; j<phi.size(); j++)
            {
              u      += phi[j][qp]*soln(dof_indices_u[j]);
              grad_u += dphi[j][qp]*soln(dof_indices_u[j]);
              v      += phi[j][qp]*soln(dof_indices_v[j]);
              grad_v += dphi[j][qp]*soln(dof_indices_v[j]);
            }

          for (unsigned int i=0; i<phi.size(); i++)
            {
              Re_u(i) += JxW[qp]*(dphi[i][qp]*grad_u);
              Re_v(i) += JxW[qp]*(dphi[i][qp]*grad_v);
            }
        }

      // At this point the interior element integration has
      // been completed.  However, we have not yet addressed
      // boundary conditions.
      //
      // // The following loops over the sides of the element.
      // // If the element has no neighbor on a side then that
      // // side MUST live on a boundary of the domain.
      // for (unsigned int side=0; side<elem->n_sides(); side++)
      //   if (elem->neighbor(side) == NULL)
      //     {
      //       // The value of the shape functions at the quadrature
      //       // points.
      //       const std::vector<std::vector<Real> >& phi_face = fe_face->get_phi();
      //
      //       // The Jacobian * Quadrature Weight at the quadrature
      //       // points on the face.
      //       const std::vector<Real>& JxW_face = fe_face->get_JxW();
      //
      //       // Compute the shape function values on the element face.
      //       fe_face->reinit(elem, side);
      //
      //       // Loop over the face quadrature points for integration.
      //       for (unsigned int qp=0; qp<qface.n_points(); qp++)
      //         {
      //           // This is the right-hand-side contribution (f),
      //           // which has to be subtracted from the current residual
      //           for (unsigned int i=0; i<phi_face.size(); i++)
      //             Re_u(i) -= JxW_face[qp]*sigma*phi_face[i][qp];
      //         }
      //     }

      dof_map.constrain_element_vector (Re, dof_indices);
      residual.add_vector (Re, dof_indices);
    }
  residual.close();

  std::vector<unsigned int> nl;
  std::vector<short int> il;
  mesh.boundary_info->build_node_list(nl, il);
  for (unsigned int ii = 0; ii < nl.size(); ii++)
    {
      const Node & nd = mesh.node(nl[ii]);
      if (il[ii] == 1)
        {
          unsigned int idx = nd.dof_number(system.number(), u_var, 0);
          residual.set(idx, soln(idx) - 1.);
        }
      else if (il[ii] == 3)
        {
          unsigned int idx = nd.dof_number(system.number(), u_var, 0);
          residual.set(idx, soln(idx) - 2.);
        }
    }
  residual.close();

  // That's it.
}


// Begin the main program.
int main (int argc, char** argv)
{
  // Initialize libMesh and any dependent libraries, like in example 2.
  LibMeshInit init (argc, argv);

#if !defined(LIBMESH_HAVE_PETSC) && !defined(LIBMESH_HAVE_TRILINOS)
  if (libMesh::processor_id() == 0)
    std::cerr << "ERROR: This example requires libMesh to be\n"
              << "compiled with nonlinear solver support from\n"
              << "PETSc or Trilinos!"
              << std::endl;
  return 0;
#endif

#ifndef LIBMESH_ENABLE_AMR
  if (libMesh::processor_id() == 0)
    std::cerr << "ERROR: This example requires libMesh to be\n"
              << "compiled with AMR support!"
              << std::endl;
  return 0;
#else
  // Create a GetPot object to parse the command line
  GetPot command_line (argc, argv);

  // Skip this 2D example if libMesh was compiled as 1D-only.
  libmesh_example_assert(2 <= LIBMESH_DIM, "2D support");

  // Create a mesh.
  Mesh mesh;
  MeshTools::Generation::build_square(mesh, 2, 2, 0, 1, 0, 1, QUAD4);
  mesh.boundary_info->build_node_list_from_side_list();

  // Print information about the mesh to the screen.
  mesh.print_info();

  // Create an equation systems object.
  EquationSystems equation_systems (mesh);
  _equation_system = &equation_systems;

  // Declare the system and its variables.
  // Creates a system named "Laplace-Young"
  NonlinearImplicitSystem& system =
    equation_systems.add_system<NonlinearImplicitSystem> ("Laplace-Young");

  // Here we specify the tolerance for the nonlinear solver and
  // the maximum number of nonlinear iterations.
  equation_systems.parameters.set<Real> ("nonlinear solver tolerance") = 1.e-12;
  equation_systems.parameters.set<unsigned int> ("nonlinear solver maximum iterations") = 50;

  // Add the variables "u" and "v" to "Laplace-Young".  They
  // will be approximated using first-order Lagrange elements.
  system.add_variable("u", FIRST, LAGRANGE);
  system.add_variable("v", FIRST, LAGRANGE);

  CouplingMatrix cm(2);
  cm(0, 0) = 1;
  cm(1, 1) = 1;
  cm(0, 1) = 1;
  system.get_dof_map()._dof_coupling = &cm;

  // Give the system a pointer to the functions that update
  // the residual and Jacobian.
  system.nonlinear_solver->residual = compute_residual;
  system.nonlinear_solver->jacobian = compute_jacobian;

  // Initialize the data structures for the equation system.
  equation_systems.init();

  // Print information about the system to the screen.
  equation_systems.print_info();

  MeshRefinement mesh_refinement (mesh);

  const unsigned int max_r_steps = 5;
  for (unsigned int r_step=0; r_step<max_r_steps; r_step++)
    {
      // Solve the system "Laplace-Young", print the number of iterations
      // and the final residual.
      equation_systems.get_system("Laplace-Young").solve();

      // Print out final convergence information.  This duplicates some
      // output from during the solve itself, but demonstrates another way
      // to get this information after the solve is complete.
      std::cout << "Laplace-Young system solved at nonlinear iteration "
                << system.n_nonlinear_iterations()
                << " , final nonlinear residual norm: "
                << system.final_nonlinear_residual()
                << std::endl;

      if (r_step+1 != max_r_steps)
        {
          std::cout << "  Refining the mesh..." << std::endl;

          ErrorVector error;
          KellyErrorEstimator error_estimator;
          error_estimator.estimate_error (system, error);

          mesh_refinement.refine_fraction() = 0.01;
          mesh_refinement.coarsen_fraction() = 0.0;
          mesh_refinement.max_h_level() = 5;
          mesh_refinement.flag_elements_by_error_fraction (error);
          mesh_refinement.refine_and_coarsen_elements();

          equation_systems.reinit ();
          mesh.boundary_info->build_node_list_from_side_list();
        }
    }

#ifdef LIBMESH_HAVE_EXODUS_API
  // After solving the system write the solution
  ExodusII_IO (mesh).write_equation_systems ("out.e", equation_systems);
#endif // #ifdef LIBMESH_HAVE_EXODUS_API
#endif // #ifndef LIBMESH_ENABLE_AMR

  // All done.
  return 0;
}
From: David Andrs <David.Andrs@in...> - 2011-09-28 15:39:28

Roy Stogner <roystgnr@...> wrote on 09/20/2011 08:01:23 AM: > > On Tue, 20 Sep 2011, Minq Q wrote: > > > When I tried to config libmesh with trilinos (version 4847) I got > this error: > > > > checking for /prog/trilinos/10.4.1/mpi/opt/include/Makefile.export.Trilinos... yes > > <<< Configuring library with Trilinos 10 support >>> > > checking for /prog/trilinos/10.4.1/mpi/opt/include/AztecOO_config.h... yes > > <<< Configuring library with AztecOO support >>> > > checking for /prog/trilinos/10.4.1/mpi/opt/include/NOX_Config.h... yes > > <<< Configuring library with NOX support >>> > > checking for /prog/trilinos/10.4.1/mpi/opt/include/Makefile.export.aztecoo... no > > checking for /prog/trilinos/10.4.1/mpi/opt/packages/aztecoo/Makefile.export.aztecoo... no > > > > Should it be > > checking for /prog/trilinos/10.4.1/mpi/opt/include/Makefile.export.AztecOO... > > instead of > > ...aztecoo ? > > Hmm... the new system is supposed to work by checking for > Makefile.export.Trilinos (for Trilinos 10 support) and then if that's > not found checking for Makefile.export.{aztecoo,nox} (which really are > lower case, for Trilinos 8 and 9 support). We just must have > forgotten to put the latter check in a conditional - that should be > easy to fix. > > It should be working, anyways, even if the configure log messages are > misleading, right? Fixed it: If trilinos 10 is found, we do not look for trilinos 9. It is in SVN head. - David Andrs 
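The fix David describes amounts to making the Trilinos 8/9 probe conditional on the Trilinos 10 probe failing. A hypothetical shell sketch of that probe order (not the actual libmesh configure fragment; the function name and demo directory are illustrative):

```shell
# Sketch: look for the Trilinos 10 export makefile first, and only fall
# back to the lower-case Trilinos 8/9 names when it is absent.
probe_trilinos() {
  inc_dir="$1"
  if [ -f "$inc_dir/Makefile.export.Trilinos" ]; then
    echo "trilinos10"
  elif [ -f "$inc_dir/Makefile.export.aztecoo" ] || [ -f "$inc_dir/Makefile.export.nox" ]; then
    echo "trilinos8or9"
  else
    echo "none"
  fi
}

# Demonstrate with a throwaway directory standing in for the real include path.
demo_dir=$(mktemp -d)
touch "$demo_dir/Makefile.export.Trilinos"
result=$(probe_trilinos "$demo_dir")   # Trilinos 10 found, so no lower-case probe runs
rm -rf "$demo_dir"
echo "$result"
```

With this ordering the misleading "checking for ...Makefile.export.aztecoo... no" lines never appear once Trilinos 10 has been detected.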
From: David Andrs <David.Andrs@in...> - 2011-09-28 15:37:58

Roy Stogner <roystgnr@...> wrote on 09/23/2011 01:53:58 PM: > > > On Tue, 20 Sep 2011, Roy Stogner wrote: > > > There's a bug here: you forgot to change PetscNonlinearSolver to > > NoxNonlinearSolver in a couple START_LOG lines. > > One more serious bug, too: there's a use of libmesh_cast_ref which > breaks for me in optimized modes. Apparently (at least in Trilinos > 9.0.3) the Epetra_Operator is a virtual base class of > Epetra_FECrsMatrix, so we're not allowed to optimize that into a > static_cast. Replacing libmesh_cast_ref with dynamic_cast fixes > compilation for me. Fixed both issues. Also found another one: returning total number of _lin._ iterations instead of _nonlinear_ ones. While doing that, I implemented get_total_linear_iterations for Nox solver as mentioned in another thread here. All is checked in. - David 
From: Roy Stogner <roystgnr@ic...> - 2011-09-23 19:54:07

On Tue, 20 Sep 2011, Roy Stogner wrote: > There's a bug here: you forgot to change PetscNonlinearSolver to > NoxNonlinearSolver in a couple START_LOG lines. One more serious bug, too: there's a use of libmesh_cast_ref which breaks for me in optimized modes. Apparently (at least in Trilinos 9.0.3) the Epetra_Operator is a virtual base class of Epetra_FECrsMatrix, so we're not allowed to optimize that into a static_cast. Replacing libmesh_cast_ref with dynamic_cast fixes compilation for me. - Roy 
From: Roy Stogner <roystgnr@ic...> - 2011-09-23 16:58:15

This was originally an off-list discussion, but after writing this reply I realized that my last two paragraphs are of interest to other developers, and it's possible that any of the rest might inspire ideas from other users: -- Forwarded message -- Date: Fri, 23 Sep 2011 11:53:10 -0500 (CDT) From: Roy Stogner <roystgnr@...> To: Truman Ellis <truman@...> Subject: Re: Allocating RHS On Fri, 23 Sep 2011, Truman Ellis wrote: > I've run into an issue with my local solve for the optimal test functions. I > am going to need to assemble multiple right-hand > sides corresponding to each trial variable and degree of freedom. My idea was > to create a > std::vector<std::vector<DenseVector<Number> > > test_F > where the first index corresponds to the index of the trial variable, and the > second to the index of the DOFs for that variable. We're in a coupled system, so at both global and local levels it's not hard to get unique DoF indices even when considering all variables at once. But I guess the extra level of vector<> indirection is likely to lead to simpler code; I've done it for that reason elsewhere. > I was going to create a corresponding > std::vector<std::vector<std::vector<DenseSubVector<Number> *> > > test_subF; > > The problem is that in the constructor of DPGContext, no particular element > has been selected and thus dof_indices_var[v].size() is > zero for each variable v. This also raises issues of how to handle p > refinement. Don't forget hybrid meshes. Even with constant p the number of DoFs can change every time the geometric element type does. > I know elem_residuals just does a resize, but since we will have a > variable number of test_subF[v][d].push_back(new > DenseSubVector<Number>(test_F)) we would probably have to > dynamically allocate and delete memory. This doesn't sound like the > optimal solution. phi, dphi, etc. all do resizes too. It doesn't sound optimal, but nothing bad has ever shown up in benchmarking. 
On the other hand, there are only two levels of resize there and having to do even more might make a difference. With one level of resize there wouldn't have been a problem at all, since std::vector typically doesn't *really* ever reduce its own size, it just leaves reserved memory for future increases. > Can you think of anything better? boost::multi_array would at least have the memory allocations happen in big contiguous blocks rather than a bunch of little allocations. But I don't know if boost is "smart" enough to leave the stride length unchanged when doing reductive resizes (since for most people the efficiency tradeoffs of that kind of memory reservation would no longer be smart as you go past 1D). What I've always wanted to do eventually is encapsulate the needs of libMesh multiarrays into a short little inline/templated class, then try using that class as a shim to vector<vector<> >, multi_array, Eigen, etc. to see what works faster. My suggestion to you would be to stick with nested vector<> for now; if performance benchmarking shows a problem with that then we'll change "eventually" to "right now". - Roy 
From: Derek Gaston <friedmud@gm...> - 2011-09-20 19:22:33

On Sep 20, 2011, at 1:02 PM, Roy Stogner wrote: > >> Notes on preconditioners: I'm not an expert here, but there are only >> two preconds implemented: ILU (thru IFPACK) and AMG (thru ML). >> IFPACK supports a lot more than just ILU, however there are no >> PRECOND_ enum entries for those and I'm not quite sure if there >> should be one. What would be a good thing to do? Add new ENUM >> entries and support those? > > I think so. Derek ought to have the last word there, though; he's > more on top of the Preconditioner classes than I am. > The only problem is: We can add enums if possible… but I'd really rather try to match things in Trilinos to our current enums. There has to be an ASM for example… and an LU, etc… Derek 
From: Roy Stogner <roystgnr@ic...> - 2011-09-20 19:02:28

On Thu, 15 Sep 2011, David Andrs wrote: > 1) Improving AztecOO solver: > - using more of its built-in preconditioners > - using more solver types - like CG, BICGSTAB > > 2) Improving NOX > - can do preconditioned JFNK (IFPACK must be enabled AFAIK) > - can do jacobian There's a bug here: you forgot to change PetscNonlinearSolver to NoxNonlinearSolver in a couple START_LOG lines. > 3) Adding support for ML package The configure part of the patch didn't apply for me here, presumably because I ran a bootstrap or someone else checked in a bootstrap using a different autoconf. Applying the rest of the patch and rebootstrapping worked. > Attached is also modified ex19, where I played and tested those > patches. There are commented lines with a few preconditioners to try > out (ILU, AMG). No time to play much with them myself, but I don't see any other obvious problems, if you're ready to commit the main patches. Trilinos support still isn't in great shape (anyone know why I'm seeing AMR convergence stall on ex14?) but you're at least uniformly improving it. :) > Notes on preconditioners: I'm not an expert here, but there are only > two preconds implemented: ILU (thru IFPACK) and AMG (thru ML). > IFPACK supports a lot more than just ILU, however there are no > PRECOND_ enum entries for those and I'm not quite sure if there > should be one. What would be a good thing to do? Add new ENUM > entries and support those? I think so. Derek ought to have the last word there, though; he's more on top of the Preconditioner classes than I am. The only problem is: > There is a possibility there is an overlap, but the names are just > slightly different: like (ICC in PETSc) and ICT in Trilinos. I do > not know. If someone knows, let me know or just build on top of my > patches. 
It would be nice to get that straightened out before adding new enums; otherwise any fix to consolidate the enums later would require a backwards-incompatible API change or a collection of "deprecated" enums in every test. - Roy 
From: Derek Gaston <friedmud@gm...> - 2011-09-20 14:29:42

On Sep 20, 2011, at 8:27 AM, Roy Stogner wrote: >> For now I think I'm going to add a get_total_linear_iterations() >> just to PetscNonlinearSolver so I can get my work done…. it will >> follow the path of get_converged_reason()…. > > This looks good to me. Long term we want to make this virtual and add > a Trilinos implementation. K - will do. Derek 
From: Roy Stogner <roystgnr@ic...> - 2011-09-20 14:28:13

On Mon, 19 Sep 2011, Derek Gaston wrote: > A while back we changed NonlinearSolver::solve() to return a > std::pair that included the total number of nonlinear iterations and > the final nonlinear residual. > > I'm now also wanting the total number of _linear_ iterations. > > What are we going to do here long term? > > This used to not be a problem because we didn't call clear() after > the solve (like we now do) so you could just get the snes object out > and inspect it after a solve… but that is no longer possible. Not an ideal way to do it anyway; I'd like people to be able to call abstract base class methods, not have to write package-specific code for basic functionality. > Should I just modify the interface so it returns a struct? Or > should we use Boost::tuple? No, no need for an API change for this one. > Or should we save off this data as > class members that you can query with something like > NonlinearSolver::total_nonlinear_iterations(), > NonlinearSolver::total_linear_iterations() ??? Kind of like we have > get_converged_reason in PetscNonlinearSolver ? > > For now I think I'm going to add a get_total_linear_iterations() > just to PetscNonlinearSolver so I can get my work done…. it will > follow the path of get_converged_reason()…. This looks good to me. Long term we want to make this virtual and add a Trilinos implementation. Thanks, - Roy 
From: Roy Stogner <roystgnr@ic...> - 2011-09-20 14:01:36

On Tue, 20 Sep 2011, Minq Q wrote: > When I tried to config libmesh with trilinos (version 4847) I got this error: > > checking for /prog/trilinos/10.4.1/mpi/opt/include/Makefile.export.Trilinos... yes > <<< Configuring library with Trilinos 10 support >>> > checking for /prog/trilinos/10.4.1/mpi/opt/include/AztecOO_config.h... yes > <<< Configuring library with AztecOO support >>> > checking for /prog/trilinos/10.4.1/mpi/opt/include/NOX_Config.h... yes > <<< Configuring library with NOX support >>> > checking for /prog/trilinos/10.4.1/mpi/opt/include/Makefile.export.aztecoo... no > checking for /prog/trilinos/10.4.1/mpi/opt/packages/aztecoo/Makefile.export.aztecoo... no > > Should it be > checking for /prog/trilinos/10.4.1/mpi/opt/include/Makefile.export.AztecOO... > instead of > ...aztecoo ? Hmm... the new system is supposed to work by checking for Makefile.export.Trilinos (for Trilinos 10 support) and then if that's not found checking for Makefile.export.{aztecoo,nox} (which really are lower case, for Trilinos 8 and 9 support). We just must have forgotten to put the latter check in a conditional - that should be easy to fix. It should be working, anyways, even if the configure log messages are misleading, right? - Roy 
From: Minq Q <micavn@gm...> - 2011-09-20 13:10:22

Hi, When I tried to config libmesh with trilinos (version 4847) I got this error: checking for /prog/trilinos/10.4.1/mpi/opt/include/Makefile.export.Trilinos... yes <<< Configuring library with Trilinos 10 support >>> checking for /prog/trilinos/10.4.1/mpi/opt/include/AztecOO_config.h... yes <<< Configuring library with AztecOO support >>> checking for /prog/trilinos/10.4.1/mpi/opt/include/NOX_Config.h... yes <<< Configuring library with NOX support >>> checking for /prog/trilinos/10.4.1/mpi/opt/include/Makefile.export.aztecoo... no checking for /prog/trilinos/10.4.1/mpi/opt/packages/aztecoo/Makefile.export.aztecoo... no Should it be checking for /prog/trilinos/10.4.1/mpi/opt/include/Makefile.export.AztecOO... instead of ...aztecoo ? Thanks, / Ming Q 
From: Derek Gaston <friedmud@gm...> - 2011-09-19 17:16:18

A while back we changed NonlinearSolver::solve() to return a std::pair that included the total number of nonlinear iterations and the final nonlinear residual. I'm now also wanting the total number of _linear_ iterations. What are we going to do here long term? This used to not be a problem because we didn't call clear() after the solve (like we now do) so you could just get the snes object out and inspect it after a solve… but that is no longer possible. Should I just modify the interface so it returns a struct? Or should we use Boost::tuple? Or should we save off this data as class members that you can query with something like NonlinearSolver::total_nonlinear_iterations(), NonlinearSolver::total_linear_iterations() ??? Kind of like we have get_converged_reason in PetscNonlinearSolver ? For now I think I'm going to add a get_total_linear_iterations() just to PetscNonlinearSolver so I can get my work done…. it will follow the path of get_converged_reason()…. Derek 
From: Roy Stogner <roystgnr@ic...> - 2011-09-16 21:14:21

On Fri, 16 Sep 2011, John Peterson wrote: > OK, I admit I never thought the Nemesis stuff would be used with a > SerialMesh and never tested any of it with that. > > Your patch of r4753 opened up this can of worms...maybe we can "fix" > it with 'git revert' ;) Ha! Yeah, if you can't get the same results out of ParallelMesh then put it on the back burner. I didn't expect Nemesis to necessarily work with SerialMesh right away; with that patch I just wanted to remove any obvious obstacles. Although, I suspect any bug we run into on a SerialMesh ought to also be triggerable by serializing a ParallelMesh and then trying to do output before re-deleting remote elements. Some of my earliest ParallelMesh debugging was done with the data kept in such a "like-SerialMesh-but-slower" mode. - Roy 
From: John Peterson <jwpeterson@gm...> - 2011-09-16 21:05:10

On Fri, Sep 16, 2011 at 2:57 PM, Roy Stogner <roystgnr@...> wrote: > > On Fri, 16 Sep 2011, John Peterson wrote: > >> On Fri, Sep 16, 2011 at 2:47 PM, Roy Stogner <roystgnr@...> >> wrote: >>> >>> If I switch ex14's output format from ExodusII to Nemesis, and if I >>> set output_intermediate=true in ex14.in, and if I run on 4 processors, >>> then I get a crash at the first output step. >> >> Parallel mesh on I assume? > > That was the original reason I switched around the output, but the > problem manifests on SerialMesh. OK, I admit I never thought the Nemesis stuff would be used with a SerialMesh and never tested any of it with that. Your patch of r4753 opened up this can of worms...maybe we can "fix" it with 'git revert' ;) - John 
From: Roy Stogner <roystgnr@ic...> - 2011-09-16 20:57:32

On Fri, 16 Sep 2011, John Peterson wrote: > On Fri, Sep 16, 2011 at 2:47 PM, Roy Stogner <roystgnr@...> wrote: >> >> If I switch ex14's output format from ExodusII to Nemesis, and if I >> set output_intermediate=true in ex14.in, and if I run on 4 processors, >> then I get a crash at the first output step. > > Parallel mesh on I assume? That was the original reason I switched around the output, but the problem manifests on SerialMesh. > I'll see if I can at least replicate it some time this afternoon. Thanks, - Roy 
From: John Peterson <jwpeterson@gm...> - 2011-09-16 20:55:18

On Fri, Sep 16, 2011 at 2:47 PM, Roy Stogner <roystgnr@...> wrote: > > If I switch ex14's output format from ExodusII to Nemesis, and if I > set output_intermediate=true in ex14.in, and if I run on 4 processors, > then I get a crash at the first output step. Parallel mesh on I assume? > METHOD=dbg shows me *where* the out-of-bounds vector access is, but I > don't know the Nemesis_IO_Helper code well enough to figure out what's > going wrong. Based on the failure conditions I'd assume it's another > code-doesn't-work-with-more-processors-than-coarse-elements bug, but > since there are some if cases in the Nemesis which specifically try to > test for such bugs, maybe it's something more complex. > > Anyway, I'm not going to have time to look into it more deeply myself > soon, so I figured I'd throw it out to the list in case anyone more > familiar with the Exodus/Nemesis code wants to see if it's replicable > and/or fixable. I'll see if I can at least replicate it some time this afternoon. - John 
From: Roy Stogner <roystgnr@ic...> - 2011-09-16 20:47:14

If I switch ex14's output format from ExodusII to Nemesis, and if I set output_intermediate=true in ex14.in, and if I run on 4 processors, then I get a crash at the first output step. METHOD=dbg shows me *where* the out-of-bounds vector access is, but I don't know the Nemesis_IO_Helper code well enough to figure out what's going wrong. Based on the failure conditions I'd assume it's another code-doesn't-work-with-more-processors-than-coarse-elements bug, but since there are some if cases in the Nemesis which specifically try to test for such bugs, maybe it's something more complex. Anyway, I'm not going to have time to look into it more deeply myself soon, so I figured I'd throw it out to the list in case anyone more familiar with the Exodus/Nemesis code wants to see if it's replicable and/or fixable. - Roy 
From: David Knezevic <dknezevic@se...> - 2011-09-15 20:44:26

Hi all, RBEvaluation makes a direct call to write_header and read_header, and this seems to be causing a problem. For example, when I modify ex23 so that it has two variables ("u" and "v") rather than one, I get an error in rb_evaluation.C:1029 when the system tries to read in a header file before reading the subsequent sequence of .xdr files. The stack trace is below; the system is looking for a variable with name "", whereas it should be "u". The header is written in rb_evaluation.C: Xdr header_data(file_name.str(), ENCODE); sys.write_header(header_data, "", false); and also read in rb_evaluation.C with: Xdr header_data(file_name.str(), DECODE); sys.read_header(header_data, "", false); I was wondering if there's anything obviously wrong with this usage of read/write_header? These methods appear to be used in a similar manner in equation_systems.C. In particular, I can't see why the 2-variable case triggers an error whereas the 1-variable case works fine? Many thanks, Dave - ERROR: variable does not exist in this system! [0] src/systems/system.C, line 1076, compiled Sep 6 2011 at 22:15:39 terminate called after throwing an instance of 'libMesh::LogicError' what(): Error in libMesh internal logic Program received signal SIGABRT, Aborted. 0x00007ffff08ccd05 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64 64 ../nptl/sysdeps/unix/sysv/linux/raise.c: No such file or directory. in ../nptl/sysdeps/unix/sysv/linux/raise.c (gdb) bt #0 0x00007ffff08ccd05 in raise (sig=6) at ../nptl/sysdeps/unix/sysv/linux/raise.c:64 #1 0x00007ffff08d0ab6 in abort () at abort.c:92 #2 0x00007ffff0ce86dd in __gnu_cxx::__verbose_terminate_handler() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #3 0x00007ffff6e670ef in libMesh::libmesh_terminate_handler () at src/base/libmesh.C:200 #4 0x00007ffff0ce6926 in ?? 
() from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #5 0x00007ffff0ce6953 in std::terminate() () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #6 0x00007ffff0ce6a5e in __cxa_throw () from /usr/lib/x86_64-linux-gnu/libstdc++.so.6 #7 0x00007ffff76bf96f in libMesh::System::variable_number (this=0x7ea930, var=...) at src/systems/system.C:1076 #8 0x00007ffff76d2d4f in libMesh::System::read_header (this=0x7ea930, io=..., version=..., read_header=false, read_additional_data=true, read_legacy_format=false) at src/systems/system_io.C:232 #9 0x00007ffff7675450 in libMesh::RBEvaluation::read_in_basis_functions (this=0x7fffffffd920, sys=..., directory_name=..., read_binary_basis_functions=true) at src/systems/rb_evaluation.C:1029 #10 0x000000000044c09f in main (argc=1, argv=0x7fffffffe668) at ex23_vector.C:144 
From: David Andrs <David.Andrs@in...> - 2011-09-15 18:52:28

/* $Id: ex4.C 2501 2007-11-20 02:33:29Z benkirk $ */ /* The Next Great Finite Element Library. */ /* Copyright (C) 2003 Benjamin S. Kirk */ /* This library is free software; you can redistribute it and/or */ /* modify it under the terms of the GNU Lesser General Public */ /* License as published by the Free Software Foundation; either */ /* version 2.1 of the License, or (at your option) any later version. */ /* This library is distributed in the hope that it will be useful, */ /* but WITHOUT ANY WARRANTY; without even the implied warranty of */ /* MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU */ /* Lesser General Public License for more details. */ /* You should have received a copy of the GNU Lesser General Public */ /* License along with this library; if not, write to the Free Software */ /* Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA */ // <h1>Example 19 - Solving the 2D Young Laplace Problem using nonlinear solvers</h1> // // This example shows how to use the NonlinearImplicitSystem class // to efficiently solve nonlinear problems in parallel. // // In nonlinear systems, we aim at finding x that satisfies R(x) = 0. // In nonlinear finite element analysis, the residual is typically // of the form R(x) = K(x)*x - f, with K(x) the system matrix and f // the "right-hand side". The NonlinearImplicitSystem class expects // two callback functions to compute the residual R and its Jacobian // for the Newton iterations. Here, we just approximate // the true Jacobian by K(x). // // You can turn on preconditioning of the matrix-free system using the // Jacobian by passing "-pre" on the command line. Currently this only // works with PETSc, so it isn't used by "make run". // // This example also runs with the experimental Trilinos NOX solvers by specifying // the --use-trilinos command line argument. 
// C++ include files that we need #include <iostream> #include <algorithm> #include <cmath> // Various include files needed for the mesh & solver functionality. #include "libmesh.h" #include "mesh.h" #include "mesh_refinement.h" #include "exodusII_io.h" #include "equation_systems.h" #include "fe.h" #include "quadrature_gauss.h" #include "dof_map.h" #include "sparse_matrix.h" #include "numeric_vector.h" #include "dense_matrix.h" #include "dense_vector.h" #include "elem.h" #include "string_to_enum.h" #include "getpot.h" // The nonlinear solver and system we will be using #include "nonlinear_solver.h" #include "nonlinear_implicit_system.h" #include "trilinos_preconditioner.h" // Necessary for programmatically setting PETSc options #ifdef LIBMESH_HAVE_PETSC #include <petsc.h> #endif #ifdef LIBMESH_HAVE_ML #include "ml_MultiLevelPreconditioner.h" #endif // Bring in everything from the libMesh namespace using namespace libMesh; // A pointer to our equation system EquationSystems *_equation_system = NULL; // Let's define the physical parameters of the equation const Real kappa = 1.; const Real sigma = 0.2; // This function computes the Jacobian K(x) void compute_jacobian (const NumericVector<Number>& soln, SparseMatrix<Number>& jacobian, NonlinearImplicitSystem& sys) { // Get a reference to the equation system. EquationSystems &es = *_equation_system; // Get a constant reference to the mesh object. const MeshBase& mesh = es.get_mesh(); // The dimension that we are running const unsigned int dim = mesh.mesh_dimension(); // Get a reference to the NonlinearImplicitSystem we are solving NonlinearImplicitSystem& system = es.get_system<NonlinearImplicitSystem>("LaplaceYoung"); // A reference to the \p DofMap object for this system. The \p DofMap // object handles the index translation from node and element numbers // to degree of freedom numbers. We will talk more about the \p DofMap // in future examples. 
const DofMap& dof_map = system.get_dof_map(); // Get a constant reference to the Finite Element type // for the first variable in the system. FEType fe_type = dof_map.variable_type(0); // Build a Finite Element object of the specified type. Since the // \p FEBase::build() member dynamically creates memory we will // store the object as an \p AutoPtr<FEBase>. This can be thought // of as a pointer that will clean up after itself. AutoPtr<FEBase> fe (FEBase::build(dim, fe_type)); // A 5th order Gauss quadrature rule for numerical integration. QGauss qrule (dim, FIFTH); // Tell the finite element object to use our quadrature rule. fe->attach_quadrature_rule (&qrule); // Here we define some references to cell-specific data that // will be used to assemble the linear system. // We begin with the element Jacobian * quadrature weight at each // integration point. const std::vector<Real>& JxW = fe->get_JxW(); // The element shape functions evaluated at the quadrature points. const std::vector<std::vector<Real> >& phi = fe->get_phi(); // The element shape function gradients evaluated at the quadrature // points. const std::vector<std::vector<RealGradient> >& dphi = fe->get_dphi(); // Define data structures to contain the Jacobian element matrix. // Following basic finite element terminology we will denote these // "Ke". More detail is in example 3. DenseMatrix<Number> Ke; // This vector will hold the degree of freedom indices for // the element. These define where in the global system // the element degrees of freedom get mapped. std::vector<unsigned int> dof_indices; // Now we will loop over all the active elements in the mesh which // are local to this processor. // We will compute the element Jacobian contribution. MeshBase::const_element_iterator el = mesh.active_local_elements_begin(); const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end(); for ( ; el != end_el; ++el) { // Store a pointer to the element we are currently // working on. 
This allows for nicer syntax later. const Elem* elem = *el; // Get the degree of freedom indices for the // current element. These define where in the global // matrix and right-hand side this element will // contribute to. dof_map.dof_indices (elem, dof_indices); // Compute the element-specific data for the current // element. This involves computing the location of the // quadrature points (q_point) and the shape functions // (phi, dphi) for the current element. fe->reinit (elem); // Zero the element Jacobian before // summing them. We use the resize member here because // the number of degrees of freedom might have changed from // the last element. Note that this will be the case if the // element type is different (i.e. the last element was a // triangle, now we are on a quadrilateral). Ke.resize (dof_indices.size(), dof_indices.size()); // Now we will build the element Jacobian. This involves // a double loop to integrate the test functions (i) against // the trial functions (j). Note that the Jacobian depends // on the current solution x, which we access using the soln // vector. // for (unsigned int qp=0; qp<qrule.n_points(); qp++) { Gradient grad_u; for (unsigned int i=0; i<phi.size(); i++) grad_u += dphi[i][qp]*soln(dof_indices[i]); const Number K = 1./std::sqrt(1. + grad_u*grad_u); for (unsigned int i=0; i<phi.size(); i++) for (unsigned int j=0; j<phi.size(); j++) Ke(i,j) += JxW[qp]*( K*(dphi[i][qp]*dphi[j][qp]) + kappa*phi[i][qp]*phi[j][qp] ); } dof_map.constrain_element_matrix (Ke, dof_indices); // Add the element matrix to the system Jacobian. jacobian.add_matrix (Ke, dof_indices); } // That's it. } // Here we compute the residual R(x) = K(x)*x - f. The current solution // x is passed in the soln vector void compute_residual (const NumericVector<Number>& soln, NumericVector<Number>& residual, NonlinearImplicitSystem& sys) { EquationSystems &es = *_equation_system; // Get a constant reference to the mesh object. 
const MeshBase& mesh = es.get_mesh(); // The dimension that we are running const unsigned int dim = mesh.mesh_dimension(); libmesh_assert (dim == 2); // Get a reference to the NonlinearImplicitSystem we are solving NonlinearImplicitSystem& system = es.get_system<NonlinearImplicitSystem>("LaplaceYoung"); // A reference to the \p DofMap object for this system. The \p DofMap // object handles the index translation from node and element numbers // to degree of freedom numbers. We will talk more about the \p DofMap // in future examples. const DofMap& dof_map = system.get_dof_map(); // Get a constant reference to the Finite Element type // for the first variable in the system. FEType fe_type = dof_map.variable_type(0); // Build a Finite Element object of the specified type. Since the // \p FEBase::build() member dynamically creates memory we will // store the object as an \p AutoPtr<FEBase>. This can be thought // of as a pointer that will clean up after itself. AutoPtr<FEBase> fe (FEBase::build(dim, fe_type)); // A 5th order Gauss quadrature rule for numerical integration. QGauss qrule (dim, FIFTH); // Tell the finite element object to use our quadrature rule. fe->attach_quadrature_rule (&qrule); // Declare a special finite element object for // boundary integration. AutoPtr<FEBase> fe_face (FEBase::build(dim, fe_type)); // Boundary integration requires one quadrature rule, // with dimensionality one less than the dimensionality // of the element. QGauss qface(dim-1, FIFTH); // Tell the finite element object to use our // quadrature rule. fe_face->attach_quadrature_rule (&qface); // Here we define some references to cell-specific data that // will be used to assemble the linear system. // We begin with the element Jacobian * quadrature weight at each // integration point. const std::vector<Real>& JxW = fe->get_JxW(); // The element shape functions evaluated at the quadrature points. 
const std::vector<std::vector<Real> >& phi = fe->get_phi(); // The element shape function gradients evaluated at the quadrature // points. const std::vector<std::vector<RealGradient> >& dphi = fe->get_dphi(); // Define data structures to contain the residual contributions DenseVector<Number> Re; // This vector will hold the degree of freedom indices for // the element. These define where in the global system // the element degrees of freedom get mapped. std::vector<unsigned int> dof_indices; // Now we will loop over all the active elements in the mesh which // are local to this processor. // We will compute the element residual. residual.zero(); MeshBase::const_element_iterator el = mesh.active_local_elements_begin(); const MeshBase::const_element_iterator end_el = mesh.active_local_elements_end(); for ( ; el != end_el; ++el) { // Store a pointer to the element we are currently // working on. This allows for nicer syntax later. const Elem* elem = *el; // Get the degree of freedom indices for the // current element. These define where in the global // matrix and right-hand side this element will // contribute to. dof_map.dof_indices (elem, dof_indices); // Compute the element-specific data for the current // element. This involves computing the location of the // quadrature points (q_point) and the shape functions // (phi, dphi) for the current element. fe->reinit (elem); // We use the resize member here because // the number of degrees of freedom might have changed from // the last element. Note that this will be the case if the // element type is different (i.e. the last element was a // triangle, now we are on a quadrilateral). Re.resize (dof_indices.size()); // Now we will build the residual. This involves // the construction of the matrix K and multiplication of it // with the current solution x. We rearrange this into two loops: // In the first, we calculate only the contribution of // K_ij*x_j which is independent of the row i. 
In the second loops, // we multiply with the rowdependent part and add it to the element // residual. for (unsigned int qp=0; qp<qrule.n_points(); qp++) { Number u = 0; Gradient grad_u; for (unsigned int j=0; j<phi.size(); j++) { u += phi[j][qp]*soln(dof_indices[j]); grad_u += dphi[j][qp]*soln(dof_indices[j]); } const Number K = 1./std::sqrt(1. + grad_u*grad_u); for (unsigned int i=0; i<phi.size(); i++) Re(i) += JxW[qp]*( K*(dphi[i][qp]*grad_u) + kappa*phi[i][qp]*u ); } // At this point the interior element integration has // been completed. However, we have not yet addressed // boundary conditions. // The following loops over the sides of the element. // If the element has no neighbor on a side then that // side MUST live on a boundary of the domain. for (unsigned int side=0; side<elem>n_sides(); side++) if (elem>neighbor(side) == NULL) { // The value of the shape functions at the quadrature // points. const std::vector<std::vector<Real> >& phi_face = fe_face>get_phi(); // The Jacobian * Quadrature Weight at the quadrature // points on the face. const std::vector<Real>& JxW_face = fe_face>get_JxW(); // Compute the shape function values on the element face. fe_face>reinit(elem, side); // Loop over the face quadrature points for integration. for (unsigned int qp=0; qp<qface.n_points(); qp++) { // This is the righthandside contribution (f), // which has to be subtracted from the current residual for (unsigned int i=0; i<phi_face.size(); i++) Re(i) = JxW_face[qp]*sigma*phi_face[i][qp]; } } dof_map.constrain_element_vector (Re, dof_indices); residual.add_vector (Re, dof_indices); } // That's it. } // Begin the main program. int main (int argc, char** argv) { // Initialize libMesh and any dependent libaries, like in example 2. 
  LibMeshInit init (argc, argv);

#if !defined(LIBMESH_HAVE_PETSC) && !defined(LIBMESH_HAVE_TRILINOS)
  if (libMesh::processor_id() == 0)
    std::cerr << "ERROR: This example requires libMesh to be\n"
              << "compiled with nonlinear solver support from\n"
              << "PETSc or Trilinos!"
              << std::endl;
  return 0;
#endif

#ifndef LIBMESH_ENABLE_AMR
  if (libMesh::processor_id() == 0)
    std::cerr << "ERROR: This example requires libMesh to be\n"
              << "compiled with AMR support!"
              << std::endl;
  return 0;
#else

  // Create a GetPot object to parse the command line
  GetPot command_line (argc, argv);

  // Check for proper calling arguments.
  if (argc < 3)
    {
      if (libMesh::processor_id() == 0)
        std::cerr << "Usage:\n"
                  << "\t " << argv[0] << " -r 2"
                  << std::endl;

      // This handy function will print the file name, line number,
      // and then abort.
      libmesh_error();
    }

  // Brief message to the user regarding the program name
  // and command line arguments.
  else
    {
      std::cout << "Running " << argv[0];

      for (int i=1; i<argc; i++)
        std::cout << " " << argv[i];

      std::cout << std::endl << std::endl;
    }

  // Read number of refinements
  int nr = 2;
  if ( command_line.search(1, "-r") )
    nr = command_line.next(nr);

  // Read FE order from command line
  std::string order = "FIRST";
  if ( command_line.search(2, "-Order", "-o") )
    order = command_line.next(order);

  // Read FE Family from command line
  std::string family = "LAGRANGE";
  if ( command_line.search(2, "-FEFamily", "-f") )
    family = command_line.next(family);

  // Cannot use a discontinuous basis.
  if ((family == "MONOMIAL") || (family == "XYZ"))
    {
      std::cout << "ex19 currently requires a C^0 (or higher) FE basis." << std::endl;
      libmesh_error();
    }

  if ( command_line.search(1, "-pre") )
    {
#ifdef LIBMESH_HAVE_PETSC
      // Use the Jacobian for preconditioning.
      PetscOptionsSetValue("-snes_mf_operator", PETSC_NULL);
#else
      std::cerr << "Must be using PETSc to use Jacobian-based preconditioning" << std::endl;

      // Returning zero so that "make run" won't fail if we ever enable
      // this capability there.
      return 0;
#endif // LIBMESH_HAVE_PETSC
    }

  // Skip this 2D example if libMesh was compiled as 1D-only.
  libmesh_example_assert(2 <= LIBMESH_DIM, "2D support");

  // Create a mesh from file.
  Mesh mesh;
  mesh.read ("lshaped.xda");

  if (order != "FIRST")
    mesh.all_second_order();

  MeshRefinement(mesh).uniformly_refine(nr);

  // Print information about the mesh to the screen.
  mesh.print_info();

  // Create an equation systems object.
  EquationSystems equation_systems (mesh);
  _equation_system = &equation_systems;

  // Declare the system and its variables.

  // Creates a system named "LaplaceYoung"
  NonlinearImplicitSystem& system =
    equation_systems.add_system<NonlinearImplicitSystem> ("LaplaceYoung");

  // Here we specify the tolerance for the nonlinear solver and
  // the maximum number of nonlinear iterations.
  equation_systems.parameters.set<Real>         ("nonlinear solver tolerance")          = 1.e-12;
  equation_systems.parameters.set<unsigned int> ("nonlinear solver maximum iterations") = 50;

  // Adds the variable "u" to "LaplaceYoung".  "u"
  // will be approximated using second-order approximation.
  system.add_variable("u",
                      Utility::string_to_enum<Order>   (order),
                      Utility::string_to_enum<FEFamily>(family));

  // Give the system a pointer to the functions that update
  // the residual and Jacobian.
  system.nonlinear_solver->residual = compute_residual;
  system.nonlinear_solver->jacobian = compute_jacobian;
  // Uncomment this for matrix-free assembly
  // system.nonlinear_solver->jacobian = NULL;

  // Initialize the data structures for the equation system.
  equation_systems.init();

  // Preconditioning
  TrilinosPreconditioner<Number> pc;
  pc.set_matrix(*system.matrix);
  // pc.set_type(SOR_PRECOND);
  // pc.set_type(ILU_PRECOND);
  // Teuchos::ParameterList list;
  // list.set("fact: level-of-fill", 5);   // use ILU(5)
  // pc.set_params(list);

  pc.set_type(AMG_PRECOND);
  Teuchos::ParameterList list;
  // ML_Epetra::SetDefaults("DD", list);
  // list.set("max levels", 6);
  // list.set("increasing or decreasing", "increasing");
  // list.set("aggregation: type", "METIS");
  // list.set("aggregation: nodes per aggregate", 16);
  // list.set("smoother: pre or post", "both");
  // list.set("coarse: type", "Amesos-KLU");
  // list.set("smoother: type", "Aztec");
  ML_Epetra::SetDefaults("SA", list);
  list.set("max levels", 2);
  list.set("increasing or decreasing", "decreasing");
  list.set("aggregation: type", "MIS");
  list.set("coarse: type", "Amesos-KLU");
  pc.set_params(list);
  pc.init();
  system.nonlinear_solver->attach_preconditioner(&pc);

  // Prints information about the system to the screen.
  equation_systems.print_info();

  // Solve the system "LaplaceYoung", print the number of iterations
  // and the final residual
  equation_systems.get_system("LaplaceYoung").solve();

  // Print out final convergence information.  This duplicates some
  // output from during the solve itself, but demonstrates another way
  // to get this information after the solve is complete.
  std::cout << "LaplaceYoung system solved at nonlinear iteration "
            << system.n_nonlinear_iterations()
            << " , final nonlinear residual norm: "
            << system.final_nonlinear_residual()
            << std::endl;

#ifdef LIBMESH_HAVE_EXODUS_API
  // After solving the system write the solution
  ExodusII_IO (mesh).write_equation_systems ("out.e", equation_systems);
#endif // #ifdef LIBMESH_HAVE_EXODUS_API

#endif // #ifndef LIBMESH_ENABLE_AMR

  // All done.
  return 0;
}
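For reference, a sketch of the weak form that the residual assembly in the code above corresponds to, written with the same symbols (K, kappa, sigma) the code uses; the sign convention follows the code, where the boundary term is subtracted:

```latex
R_i(u) = \int_\Omega \Big( K(\nabla u)\,\nabla u \cdot \nabla\varphi_i
         + \kappa\, u\, \varphi_i \Big)\, d\Omega
         - \int_{\partial\Omega} \sigma\, \varphi_i \, ds ,
\qquad
K(\nabla u) = \frac{1}{\sqrt{1 + |\nabla u|^2}} .
```

The first integral is the interior element contribution accumulated in the quadrature loop; the second is the face contribution assembled on sides with no neighbor.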
From: Roy Stogner <roystgnr@ic...> - 2011-09-15 16:34:09

On Thu, 15 Sep 2011, Vasilis Vavourakis wrote:

> was just wondering if it would be easy to add in the libMesh library
> the feature of doing the domain decomposition for nodes, in addition
> to elements as it does right now.

Hmm... we currently *do* do domain decomposition for nodes.  There are
two catches to it:

Each node currently gets put in the lowest-numbered domain available
among elements containing that node.  (I don't recall for certain
whether "containing" here is in the geometric or graph sense in the
hanging nodes case, but I think graph.)  There's some flexibility with
this, but:

Each node needs to get put in *some* domain that is shared among
elements containing that node.  If that's not valid then a lot of
library code probably breaks.

> the reason of my inquiry is that this new feature would be handy in
> case one implements libMesh for Meshless methods... or any other
> techniques (don't have something else in mind) for which partitioning
> is done for the mesh nodes instead of for the mesh elements.

That's a lot trickier.  It's impossible to come up with an elements
partitioning for which no valid nodes partitioning exists, and once
you've got the elements partitioning it's easy to create a valid nodes
partitioning.  However, it is possible to come up with a nodes
partitioning for which no valid elements partitioning exists, and even
when a valid elements partitioning does exist it's not always trivial
to construct.

> i guess that this will need to provide a few additional functions to
> the MeshBase class, like:

Node::active() currently doesn't mean much, just "was given a node id".
In an AMR mesh, I believe *every* node which is attached to an element
is also attached to an active element.  So with that in mind:

> n_active_local_nodes()

Probably just tells you n_local_nodes()

> active_local_node_begin()
> active_local_node_end()

would match local_nodes_begin/end()

> active_not_local_elements_begin()
> active_not_local_elements_end()

already exist.
---
Roy
From: David Andrs <David.Andrs@in...> - 2011-09-15 14:04:52

Roy Stogner <roystgnr@...> wrote on 09/13/2011 09:54:32 AM:

> On Mon, 12 Sep 2011, David Andrs wrote:
>
> > Attached is a patch that can configure libMesh against 9.0.3 and 10.6.4.
>
> Works great for me.  Unless someone else reports breakage I'd say
> commit it.
>
> > I basically kept the previous m4 macros, just moved them around a
> > little bit.  Also previously there was a macro HAVE_TRILINOS that was
> > used for enabling AztecOO.  I replaced that with HAVE_AZTECOO since
> > Trilinos got this next-gen package with iterative solvers called
> > Belos, so that our macros are more descriptive.
>
> Yeah, I noticed that.  I hate breaking backwards API compatibility,
> but I also hate horribly misnamed APIs, so your change looks like an
> improvement to me.
> ---
> Roy

Just checked that in.  Also with the configure script update, so it is
ready for other people to pick it up and use it.

--
David Andrs