From: David K. <dav...@ak...> - 2018-01-16 16:41:07
|
Hi Roy, Thanks for your comments, that clarifies things a lot! More comments below: On Tue, Jan 16, 2018 at 10:39 AM, Roy Stogner <roy...@ic...> wrote: > > On Tue, 16 Jan 2018, David Knezevic wrote: > > I'm working with Jonas on this topic, and I just wanted to add a few >> - The original issue that Jonas ran into was indeed due to a "constraint >> cycle". >> > > Sorry my email crossed with yours! > > - Our preferred fix is to detect constraint cycles and fix them ourselves, >> or at least to detect them and throw an error. What is the situation with >> error checking for constraint cycles at the moment? >> > > I'm pretty sure there is none. > > This seems dangerous to me since libMesh can currently silently give >> the wrong answer. >> > > Agreed. > > - If there is no error checking for constraint cycles, then I'd propose to >> add some optional error checking to DofMap >> > > Certainly! Except that I'd make it mandatory in dbg+devel mode. > > (and make sure this can be active in opt as well as dbg mode). >> > > If that can be done sensibly, I suppose. However, a dbg+devel-mode-only > check would be *easy* - dig into the function where we expand > recursive constraints, and add a quick libmesh_assert() that each of > the constrainer DoF indices doesn't equal the constrained DoF index at > each step at the expansion. Doing that check in opt mode would be too > expensive, even optionally, IMHO, since you'd be testing your > do-we-do-testing flag a zillion times. > > If you make a separate function to test constraints for recursion > before expanding them out, and only call that function after testing > an optional flag, that would be fine in opt mode, but more repetitive > to code. OK, I'll look into make a PR with this implementation. > I guess this is as simple as a function >> that is called after process_constraints that checks that all entries in >> constrain rows are unconstrained. >> > > Hmm... because *if* the recursion terminated properly, then we'd > expect that to be true after the expansion code was done? Yes, that was the idea. > That might work. I wouldn't trust it, though. You're no longer proposing > testing whether the constraints are expected to break after > processing, you're proposing testing whether the processing looks > broken in a specific way. It'll work for now, but then it may just > fail to trigger after the next time we refactor process_constraints(). > Yes, good point. Better to avoid this approach if possible. I think your suggestion above is better so I'll start with that. > I suppose either way you'll want to add a unit test or two that catch > cyclic constraints, and if we've got test coverage then I'm less > worried about breaking things. So okay, this would be reasonable too. > Agreed. Thanks, David |
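For reference, a standalone sketch of the second option discussed above (a separate function that tests the constraint rows for cycles before they are expanded). It operates on a plain map of constraint rows rather than on DofMap's internal containers, so every type and name here is a placeholder, not libMesh API:

  #include <map>
  #include <set>
  #include <vector>

  // A set of constraint rows, as "constrained dof -> (constraining dof -> weight)".
  using ConstraintRows = std::map<unsigned int, std::map<unsigned int, double>>;

  // Returns true if some chain of constraints leads back to the dof it started
  // from, e.g. dof 0 constrained by dof 1 while dof 1 is constrained by dof 0.
  bool has_constraint_cycle (const ConstraintRows & rows)
  {
    for (const auto & start_row : rows)
      {
        const unsigned int start = start_row.first;
        std::set<unsigned int> visited;
        std::vector<unsigned int> stack {start};

        while (!stack.empty())
          {
            const unsigned int d = stack.back();
            stack.pop_back();

            const auto row_it = rows.find(d);
            if (row_it == rows.end())
              continue;                      // d is unconstrained: chain ends here

            for (const auto & entry : row_it->second)
              {
                const unsigned int constrainer = entry.first;
                if (constrainer == start)
                  return true;               // back where we started: a cycle
                if (visited.insert(constrainer).second)
                  stack.push_back(constrainer);
              }
          }
      }
    return false;
  }

Walking from each constrained dof through its constrainers and checking whether the walk ever returns to the starting dof catches exactly the kind of cycle in Jonas's example (dof 0 constrained by dof 1 and vice versa).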
From: Roy S. <roy...@ic...> - 2018-01-16 15:39:13
|
On Tue, 16 Jan 2018, David Knezevic wrote: > I'm working with Jonas on this topic, and I just wanted to add a few > - The original issue that Jonas ran into was indeed due to a "constraint > cycle". Sorry my email crossed with yours! > - Our preferred fix is to detect constraint cycles and fix them ourselves, > or at least to detect them and throw an error. What is the situation with > error checking for constraint cycles at the moment? I'm pretty sure there is none. > This seems dangerous to me since libMesh can currently silently give > the wrong answer. Agreed. > - If there is no error checking for constraint cycles, then I'd propose to > add some optional error checking to DofMap Certainly! Except that I'd make it mandatory in dbg+devel mode. > (and make sure this can be active in opt as well as dbg mode). If that can be done sensibly, I suppose. However, a dbg+devel-mode-only check would be *easy* - dig into the function where we expand recursive constraints, and add a quick libmesh_assert() that each of the constrainer DoF indices doesn't equal the constrained DoF index at each step at the expansion. Doing that check in opt mode would be too expensive, even optionally, IMHO, since you'd be testing your do-we-do-testing flag a zillion times. If you make a separate function to test constraints for recursion before expanding them out, and only call that function after testing an optional flag, that would be fine in opt mode, but more repetitive to code. > I guess this is as simple as a function > that is called after process_constraints that checks that all entries in > constrain rows are unconstrained. Hmm... because *if* the recursion terminated properly, then we'd expect that to be true after the expansion code was done? That might work. I wouldn't trust it, though. You're no longer proposing testing whether the constraints are expected to break after processing, you're proposing testing whether the processing looks broken in a specific way. It'll work for now, but then it may just fail to trigger after the next time we refactor process_constraints(). I suppose either way you'll want to add a unit test or two that catch cyclic constraints, and if we've got test coverage then I'm less worried about breaking things. So okay, this would be reasonable too. --- Roy |
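A minimal sketch of the "quick libmesh_assert() at each step of the expansion" idea mentioned above, pulled out into a free function; the container type and names are placeholders rather than DofMap's internal ones:

  #include "libmesh/libmesh_common.h"   // libmesh_assert()
  #include <map>

  // At each expansion step, no dof may appear as one of its own constrainers.
  inline void assert_no_self_reference (unsigned int constrained_dof,
                                        const std::map<unsigned int, double> & constraint_row)
  {
    for (const auto & pr : constraint_row)           // (constrainer dof, weight)
      libmesh_assert (pr.first != constrained_dof);  // fires once a cycle folds
                                                     // back onto the constrained dof
  }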
From: Roy S. <roy...@ic...> - 2018-01-16 15:28:30
|
On Sun, 14 Jan 2018, Jonas Ballani wrote: > I have a question regarding constrained dofs that are themselves constrained by other dofs within the same element. Apparently, this leads to a non-symmetric constrained element matrix which I illustrate here by a small example: > > I consider an element with dof indices = (0,1,2,3). I impose the following contraint rows > > { > constrained_dof_index = 0; > constraint_row[1] = 1.0; > constraint_row[2] = -1.0; > } > { > constrained_dof_index = 1; > constraint_row[0] = 1.0; > constraint_row[3] = -1.0; > } So x0 = x1 - x2 is the first constraint, and x1 = x0 - x3 is the second constraint? You're constraining DoF 0 in terms of DoF 1, *and* you're constraining DoF 1 in terms of DoF 0? I'm afraid we don't support constraint equations with cycles. You can have a sequence in which you constrain a DoF with respect to a second DoF which is itself constrained with respect to a third DoF and so on indefinitely, *except* that you can never end up at a constraint which was already referenced earlier in that sequence. I don't think that limitation would be easy to change, either. It would turn DofMap::enforce_constraints_exactly() from a simple sweep into an outright linear solve. --- Roy |
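By contrast, a chain without a cycle is fine. A sketch of such a chain using the same add_constraint_row() call as in the original example (assuming DofConstraintRow is the map type expected by that call; the hard-coded dof ids are purely for illustration):

  #include "libmesh/dof_map.h"

  using namespace libMesh;

  // x0 = x1 - x2 and x1 = x3: dof 1 is itself constrained, but never by dof 0.
  void add_acyclic_chain (DofMap & dof_map)
  {
    DofConstraintRow row0;
    row0[1] = 1.0;    // x0 =  1*x1 ...
    row0[2] = -1.0;   //        ... - 1*x2
    dof_map.add_constraint_row(0, row0, 0., true);

    DofConstraintRow row1;
    row1[3] = 1.0;    // x1 = x3
    dof_map.add_constraint_row(1, row1, 0., true);

    // After processing, the first row expands to x0 = x3 - x2; the recursion
    // terminates because the chain never revisits a dof.
  }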
From: David K. <dav...@ak...> - 2018-01-16 14:00:18
|
Hi all, I'm working with Jonas on this topic, and I just wanted to add a few comments: - The original issue that Jonas ran into was indeed due to a "constraint cycle". - Our preferred fix is to detect constraint cycles and fix them ourselves, or at least to detect them and throw an error. What is the situation with error checking for constraint cycles at the moment? I made a test case with a constraint cycle and it doesn't hit an assertion failure or error in either dbg or opt mode, so as far as I have seen there seems to be no check for this? This seems dangerous to me since libMesh can currently silently give the wrong answer. - If there is no error checking for constraint cycles, then I'd propose to add some optional error checking to DofMap (and make sure this can be active in opt as well as dbg mode). I guess this is as simple as a function that is called after process_constraints that checks that all entries in constrain rows are unconstrained. Thanks, David On Mon, Jan 15, 2018 at 10:05 AM, Jonas Ballani <jon...@ep...> wrote: > Dear all, > > What I found out so far is that the problem in my example is related to > having a cycle in the constraints. If there are no cycles in the > constraints, all constraints can be resolved before they are applied so > that each constrained_dof_index (which is a row index) never appears as a > constraining dof (which is a column index). In this case, the matrix C has > a zero column at the constrained_dof_index, so that C^T*K*C has zero rows > and columns for all constrained_dof_indices. This means that inserting > unit rows for the constrained_dof_indices afterwards produces a symmetric > constraint matrix. In case of a cycle, the columns of C at the > constrained_dof_indices can also be non-zero as in my example so that the > final result is non-symmetric. > > I ran into that issue in opt mode, so I wonder if there is any error > checking in place that looks for cycles in the constraints? > > Best, > Jonas > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > Libmesh-users mailing list > Lib...@li... > https://lists.sourceforge.net/lists/listinfo/libmesh-users > |
From: Jonas B. <jon...@ep...> - 2018-01-15 15:05:27
|
Dear all, What I found out so far is that the problem in my example is related to having a cycle in the constraints. If there are no cycles in the constraints, all constraints can be resolved before they are applied so that each constrained_dof_index (which is a row index) never appears as a constraining dof (which is a column index). In this case, the matrix C has a zero column at the constrained_dof_index, so that C^T*K*C has zero rows and columns for all constrained_dof_indices. This means that inserting unit rows for the constrained_dof_indices afterwards produces a symmetric constraint matrix. In case of a cycle, the columns of C at the constrained_dof_indices can also be non-zero as in my example so that the final result is non-symmetric. I ran into that issue in opt mode, so I wonder if there is any error checking in place that looks for cycles in the constraints? Best, Jonas |
From: Jonas B. <jon...@ep...> - 2018-01-14 18:29:08
|
Dear all,

I have a question regarding constrained dofs that are themselves constrained by other dofs within the same element. Apparently, this leads to a non-symmetric constrained element matrix, which I illustrate here with a small example:

I consider an element with dof indices = (0,1,2,3). I impose the following constraint rows

  {
    constrained_dof_index = 0;
    constraint_row[1] = 1.0;
    constraint_row[2] = -1.0;
  }
  {
    constrained_dof_index = 1;
    constraint_row[0] = 1.0;
    constraint_row[3] = -1.0;
  }

using

  system.get_dof_map().add_constraint_row(constrained_dof_index, constraint_row, 0., true);

Since dof 1 is also constrained, these constraints get preprocessed, so that dof_map.print_dof_constraints(); yields

  Constraints for DoF 0: (0,1) (2,-1) (3,-1) rhs: 0
  Constraints for DoF 1: (2,-1) (3,-2) rhs: 0

This means that the following constraint matrix is generated:

  C =  1  0 -1 -1
       0  0 -1 -2
       0  0  1  0
       0  0  0  1

Assume now that the (symmetric) element matrix K is given by

  K =  1  2  3  4
       2  3  4  5
       3  4  5  6
       4  5  6  7

Then

  C^T*K*C =  1  0  0 -1
             0  0  0  0
             0  0 -1 -1
            -1  0 -1  0

This means that

  dof_map.constrain_element_matrix(K, dof_indices, /*asymmetric_constraint_rows*/false);

generates the (non-symmetric) constrained matrix

  K_constrained =  1  0  0  0
                   0  1  0  0
                   0  0 -1 -1
                  -1  0 -1  0

because the first two rows are replaced by unit rows. My question is whether this can be avoided, or whether I have to use a non-symmetric solver in this case. Best, Jonas |
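For anyone who wants to reproduce the numbers in this example, here is a small standalone program (plain C++, no libMesh required) that forms C^T*K*C and then overwrites the two constrained rows with unit rows, as constrain_element_matrix() does, giving the non-symmetric K_constrained shown above:

  #include <iostream>

  int main ()
  {
    const int n = 4;
    const double C[4][4] = {{1, 0, -1, -1},
                            {0, 0, -1, -2},
                            {0, 0,  1,  0},
                            {0, 0,  0,  1}};
    const double K[4][4] = {{1, 2, 3, 4},
                            {2, 3, 4, 5},
                            {3, 4, 5, 6},
                            {4, 5, 6, 7}};

    double KC[4][4] = {}, CtKC[4][4] = {};

    for (int i = 0; i < n; ++i)            // KC = K * C
      for (int j = 0; j < n; ++j)
        for (int k = 0; k < n; ++k)
          KC[i][j] += K[i][k] * C[k][j];

    for (int i = 0; i < n; ++i)            // CtKC = C^T * (K * C), still symmetric
      for (int j = 0; j < n; ++j)
        for (int k = 0; k < n; ++k)
          CtKC[i][j] += C[k][i] * KC[k][j];

    // Replace the constrained rows (dofs 0 and 1) with unit rows; this is the
    // step that breaks symmetry, since column 0 of C^T*K*C is non-zero here.
    for (int j = 0; j < n; ++j)
      {
        CtKC[0][j] = (j == 0);
        CtKC[1][j] = (j == 1);
      }

    for (int i = 0; i < n; ++i)            // prints the K_constrained above
      {
        for (int j = 0; j < n; ++j)
          std::cout << CtKC[i][j] << "  ";
        std::cout << "\n";
      }
    return 0;
  }

The printed matrix has entry (0,3) = 0 but (3,0) = -1, i.e. exactly the asymmetry Jonas reports.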
From: Roy S. <roy...@ic...> - 2018-01-12 22:37:19
|
Copying this to libmesh-users; it looks like an issue that could affect anyone trying to reuse Elem objects. On Fri, 12 Jan 2018, Jiang, Wen wrote: > Thanks. I think your explanation makes a lot sense to me. I followed > your suggestions that manually call clear_old_dof_object() for the > reused element, but I still hit the same error. Hmmm... are you clearing (or never setting) refinement_flag() too? > Is there any method that we should call to let libmesh to identify > those elements and skip them for projection? clear_old_dof_object() plus set_refinement_flag(DO_NOTHING) should do it! The old_dof_object pointer and the refinement_flag are the only things that the relevant if statement branches on. If that combination doesn't work, then we need to figure out what else is happening. The next step would be to run in gdb, "catch throw" or "b MPI_Abort" or whatever you usually do to catch assertions, then step up the stack to BuildProjectionList::operator() and "p elem->old_dof_object", "p elem->refinement_flag() to see what exactly is going on. Wait, wait, wait. I have another theory about what might be going on. When we hit DofMap::reinit(), that's where the library automatically *creates* old_dof_object, for any object with current dof indexing that needs to be remembered later as old dof indexing, even if the "current dof indexing" is as trivial as "this object has variables for this system but doesn't have any dofs for each variable". So maybe you're hitting that code, it sees the existing DofObject::n_vars(sys) is true, and so it creates old_dof_object! In that case what you'd want to run is clear_dofs() - you might not even need clear_old_dof_object(), since this second theory of mine suggests that you're doing your XFEM stuff at a stage where old_dof_object either has already been automatically cleared or soon will be. Hmm... there's also a chance you'll need to set_n_systems(your_number_of_systems) afterwards to avoid a different assertion. I don't *think* that will happen, I think n_sys gets kept up to date automatically in EquationSystems::reinit (if only because otherwise I think even the old XFEM code would be breaking), but it's a possibility. Let me know what works or doesn't? Also, if clear_dofs() does turn out to be the solution, you might want to add a libMesh unit test that asserts it can be called publicly and has the desired effect of making has_dofs(sys) return false afterwards. If I'd been designing the clear_dofs() API, for modularity's sake I'd have made it private and accessable only via a DofMap friend declaration in DofObject... and if I ever go on a crazy refactoring-for-modularity bender someday I'd hate to break your code because I forgot about your use case! --- Roy > Regards, > Wen > > > Assertion `node->old_dof_object' failed. 
> > Stack frames: 19 > 0: 0 libmesh_dbg.0.dylib 0x0000000103cf91cf libMesh::print_trace(std::__1::basic_ostream<char, std::__1::char_traits<char> >&) + > 2287 > 1: 1 libmesh_dbg.0.dylib 0x0000000103ce7fea libMesh::MacroFunctions::report_error(char const*, int, char const*, char const*) + > 634 > 2: 2 libmesh_dbg.0.dylib 0x0000000103b49d7a libMesh::DofMap::old_dof_indices(libMesh::Elem const*, std::__1::vector<unsigned > int, std::__1::allocator<unsigned int> >&, unsigned int) const + 10218 > 3: 3 libmesh_dbg.0.dylib 0x00000001054dd3de > libMesh::BuildProjectionList::operator()(libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*> const&) + 5726 > 4: 4 libmesh_dbg.0.dylib 0x00000001054d9229 void > libMesh::Threads::parallel_reduce<libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*>, > libMesh::BuildProjectionList>(libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*> const&, > libMesh::BuildProjectionList&) + 153 > 5: 5 libmesh_dbg.0.dylib 0x00000001054d63ad libMesh::System::project_vector(libMesh::NumericVector<double> const&, > libMesh::NumericVector<double>&, int) const + 3149 > 6: 6 libmesh_dbg.0.dylib 0x00000001054d557f libMesh::System::project_vector(libMesh::NumericVector<double>&, int) const + 159 > 7: 7 libmesh_dbg.0.dylib 0x0000000105414bbd libMesh::System::restrict_vectors() + 1085 > 8: 8 libmesh_dbg.0.dylib 0x0000000105415b19 libMesh::System::prolong_vectors() + 25 > 9: 9 libmesh_dbg.0.dylib 0x0000000105357a9c libMesh::EquationSystems::reinit() + 9948 > 10: 10 libmoose-dbg.0.dylib 0x00000001017e8703 FEProblemBase::meshChanged() + 211 > 11: 11 libmoose-dbg.0.dylib 0x00000001017e85b1 FEProblemBase::updateMeshXFEM() + 161 > 12: 12 libmoose-dbg.0.dylib 0x0000000101e45216 Transient::solveStep(double) + 1238 > 13: 13 libmoose-dbg.0.dylib 0x0000000101e44cfd Transient::takeStep(double) + 253 > 14: 14 libmoose-dbg.0.dylib 0x0000000101e449a3 Transient::execute() + 211 > 15: 15 libmoose-dbg.0.dylib 0x0000000101bac773 MooseApp::executeExecutioner() + 179 > 16: 16 libmoose-dbg.0.dylib 0x0000000101baeeda MooseApp::run() + 282 > 17: 17 xfem-dbg 0x0000000100002343 main + 307 > 18: 18 libdyld.dylib 0x00007fff9139d235 start + 1 > > On Fri, Jan 12, 2018 at 8:59 AM, Roy Stogner <roy...@ic...> wrote: > > On Thu, 11 Jan 2018, Jiang, Wen wrote: > > This is Wen. I have been working with Ben at INL on the XFEM > development. Recently, I made some changes to the XFEM codes and I > got a libmesh assertion error, shown below, > > The change I made is minor. Originally, we create new child element > from parent element and then delete parent element. Now, I just > changed some nodes with the parent element and use it without > deleting it. I am not sure why it will cause this issue. Could you > give me some suggestions? > > > Assertion `node->old_dof_object' failed. > > > That stack trace is in the first step of a mesh-to-mesh projection, > where we figure out what data needs to be set from processor to > processor: if new element A is owned by processor 1 and its solution > is going to be a restriction or prolongation involving element B which > used to be owned by processor 2, then processor 2 needs to send some > DoF coefficients for element B to processor 1. So we're looping over > all new elements and checking for the old data which is going to be > needed. > > At nodes on C0 elements, restriction and prolongation are easy: > they're just copies. 
If for some variable V, node A used to have DoF > index 100 and now has DoF index 110, then the processor which used to > own node A needs to send DoF coefficient 100 to the processor which > now owns node A, so that it can be used to set coefficient 110. > > So, we look at the Node parent class DofObject to figure out the new > indexing, and at the DofObject::old_dof_object to figure out the old > indexing, and if the old_dof_object doesn't exist then we have no idea > what to do. > > We hit and fixed this problem already with new element creation, I > believe... looks like that's at system_projection.C lines 1116-1132, > in the same method that you're seeing fail now? > > I think I understand. > > What *used* to happen is that you created elements fresh, so when the > projection code hit them it saw that they had no old_dof_object and no > JUST_REFINED or JUST_COARSENED flag, so it recognized them as new > elements and didn't bother trying to do solution projection on them. > > What *now* happens is that you're reusing an old Elem object (which is > a decent idea for performance reasons alone, don't get me wrong!) but > you're assigning brand new Node objects to it? So when the projection > code looks at this Elem it sees the old_dof_object on it, it doesn't > consider the Elem to be "new", it tries to look up old DoF indices, it > can't *find* any old_dof_object on the new Nodes, and it screams and > dies. > > So what the fix should be depends on what you want to happen with the > solution on this object, I guess. If this is a "shrinking" element > that should keep its old solution values then you'd want to try to > reuse its old Node objects too; if it's (conceptually) a "new" element > on which libMesh should leave new DoF values unset then you'd want to > manually clear_old_dof_object() on it so libMesh doesn't mistake it > for an old element. > > Hope this helps! Let me know if my guesses turn out to be wrong or if > you have more questions. > --- > Roy > > > Stack frames: 20 > 0: 0 libmesh_dbg.0.dylib 0x0000000103ced1cf libMesh::print_trace(std::__1::basic_ostream<char, > std::__1::char_traits<char> >&) + > 2287 > 1: 1 libmesh_dbg.0.dylib 0x0000000103cdbfea libMesh::MacroFunctions::report_error(char const*, int, char > const*, char const*) + > 634 > 2: 2 libmesh_dbg.0.dylib 0x0000000103b3dd7a libMesh::DofMap::old_dof_indices(libMesh::Elem const*, > std::__1::vector<unsigned > int, std::__1::allocator<unsigned int> >&, unsigned int) const + 10218 > 3: 3 libmesh_dbg.0.dylib 0x00000001054d13de > libMesh::BuildProjectionList::operator()(libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem > const*> const&) + 5726 > 4: 4 libmesh_dbg.0.dylib 0x00000001054cd229 void > libMesh::Threads::parallel_reduce<libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*>, > libMesh::BuildProjectionList>(libMesh::StoredRange<libMesh::MeshBase::const_element_iterator, libMesh::Elem const*> > const&, > libMesh::BuildProjectionList&) + 153 > 5: 5 libmesh_dbg.0.dylib 0x00000001054ca3ad > libMesh::System::project_vector(libMesh::NumericVector<double> const&, > libMesh::NumericVector<double>&, int) const + 3149 > > > > |
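A sketch of the fix being discussed, collected into one helper. The call names are the ones named in this thread; set_refinement_flag() assumes an AMR-enabled build, and whether the commented-out clear_dofs() call is also needed depends on which of the two theories above applies:

  #include "libmesh/elem.h"

  using namespace libMesh;

  // Mark a recycled Elem so the projection code treats it as brand new.
  void mark_reused_elem_as_new (Elem & elem)
  {
    elem.set_refinement_flag(Elem::DO_NOTHING);  // not JUST_REFINED/JUST_COARSENED
    elem.clear_old_dof_object();                 // forget any old dof indexing
    // elem.clear_dofs();                        // second theory: also clear the
                                                 // current indexing before reinit()
  }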
From: David K. <dav...@ak...> - 2018-01-10 20:51:33
|
On Wed, Jan 10, 2018 at 3:41 PM, Roy Stogner <roy...@ic...> wrote: > > On Wed, 10 Jan 2018, David Knezevic wrote: > > I'd like to use unique_ids to identify elements and nodes without worrying >> about renumbering, but before I do that I'd like to understand a bit more >> about how unique_ids are defined. >> >> In particular, what is the idea behind how we generate the unique_ids? >> > > Basically we have two constraints: > > 1. We want to never ever reuse a unique_id for two different > DofObjects of the same category (Nodes, Elems). > 2. We want to assign a unique_id to each DofObject as soon as possible. > > And that leads us to a few complications: > > 1. Our unique_id values will often be sparse: When adaptively > coarsening we delete objects, and we *don't* give their replacements > the same unique_id if/when we adaptively refine the same area again. > When building distributed meshes, we add objects on one processor > without communicating with other processors, so each processor "owns" > an interleaved subset of unique_id space, and may leave some of that > space blank. > 2. Our unique_id values are really only well-defined *when* they're > created, and that creation may depend on mesh type and partitioning. > So: > > One concrete question I have is if I create two Mesh objects by reading in >> the same mesh exodus file twice, will those two meshes have matching >> unique_ids? I guess they will simply because "add_node" and "add_elem" >> will >> be called in the same order during those two read operations. >> > > If both reads are using the same libMesh version, and either both are > to a ReplicatedMesh or both are to a DistributedMesh, then currently > the answer is yes. I don't expect that to change, though I haven't > really thought about it before. > > Please don't count on unique_id numbering remaining the same from > version to version, though! If you need to open a mesh next year with > the same unique_id numbering it has this year, save that mesh in XDR > or XDA (where we write out unique_id), not Exodus. Thanks for the explanations, very helpful! David |
From: Roy S. <roy...@ic...> - 2018-01-10 20:41:46
|
On Wed, 10 Jan 2018, David Knezevic wrote: > I'd like to use unique_ids to identify elements and nodes without worrying > about renumbering, but before I do that I'd like to understand a bit more > about how unique_ids are defined. > > In particular, what is the idea behind how we generate the unique_ids? Basically we have two constraints: 1. We want to never ever reuse a unique_id for two different DofObjects of the same category (Nodes, Elems). 2. We want to assign a unique_id to each DofObject as soon as possible. And that leads us to a few complications: 1. Our unique_id values will often be sparse: When adaptively coarsening we delete objects, and we *don't* give their replacements the same unique_id if/when we adaptively refine the same area again. When building distributed meshes, we add objects on one processor without communicating with other processors, so each processor "owns" an interleaved subset of unique_id space, and may leave some of that space blank. 2. Our unique_id values are really only well-defined *when* they're created, and that creation may depend on mesh type and partitioning. So: > One concrete question I have is if I create two Mesh objects by reading in > the same mesh exodus file twice, will those two meshes have matching > unique_ids? I guess they will simply because "add_node" and "add_elem" will > be called in the same order during those two read operations. If both reads are using the same libMesh version, and either both are to a ReplicatedMesh or both are to a DistributedMesh, then currently the answer is yes. I don't expect that to change, though I haven't really thought about it before. Please don't count on unique_id numbering remaining the same from version to version, though! If you need to open a mesh next year with the same unique_id numbering it has this year, save that mesh in XDR or XDA (where we write out unique_id), not Exodus. --- Roy |
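A sketch of the "read the same file twice" check and of writing XDA to preserve the numbering, as suggested above (assumes a libMesh build with unique ids enabled; the file names are placeholders):

  #include "libmesh/libmesh.h"
  #include "libmesh/replicated_mesh.h"
  #include "libmesh/node.h"

  using namespace libMesh;

  int main (int argc, char ** argv)
  {
    LibMeshInit init (argc, argv);

    // Read the same Exodus file twice.
    ReplicatedMesh mesh_a (init.comm()), mesh_b (init.comm());
    mesh_a.read("cube.e");
    mesh_b.read("cube.e");

    // With the same libMesh version and the same mesh type, corresponding
    // nodes should carry the same unique_id, per the answer above.
    for (const Node * node_a : mesh_a.node_ptr_range())
      {
        const Node & node_b = mesh_b.node_ref(node_a->id());
        libmesh_assert_equal_to (node_a->unique_id(), node_b.unique_id());
      }

    // To keep this numbering readable by a future libMesh, write XDA/XDR
    // (which stores unique_id) rather than Exodus (which does not).
    mesh_a.write("cube_with_ids.xda");

    return 0;
  }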
From: David K. <dav...@ak...> - 2018-01-10 20:10:07
|
On Wed, Jan 10, 2018 at 3:07 PM, John Peterson <jwp...@gm...> wrote: > > > On Wed, Jan 10, 2018 at 12:38 PM, David Knezevic < > dav...@ak...> wrote: > >> I'd like to use unique_ids to identify elements and nodes without worrying >> about renumbering, but before I do that I'd like to understand a bit more >> about how unique_ids are defined. >> >> In particular, what is the idea behind how we generate the unique_ids? >> From >> the code in replicated_mesh.C, for example, it looks like the unique_ids >> are set in add_node and add_elem, with _next_unique_id being incremented >> each time a new ID is set. This looks simple enough, but I was wondering >> if >> there's anything else to it? >> > > That's pretty much all there is to it, at least as far as I know. > > > One concrete question I have is if I create two Mesh objects by reading in >> the same mesh exodus file twice, will those two meshes have matching >> unique_ids? I guess they will simply because "add_node" and "add_elem" >> will >> be called in the same order during those two read operations. >> > > Yes, the unique ids are only unique per mesh, as the _next_unique_id is > owned by MeshBase. > > I can't see why two meshes created in exactly the same way wouldn't have > the exact same unique ids for all nodes and elems. > OK, thanks! David |
From: John P. <jwp...@gm...> - 2018-01-10 20:08:20
|
On Wed, Jan 10, 2018 at 12:38 PM, David Knezevic <dav...@ak... > wrote: > I'd like to use unique_ids to identify elements and nodes without worrying > about renumbering, but before I do that I'd like to understand a bit more > about how unique_ids are defined. > > In particular, what is the idea behind how we generate the unique_ids? From > the code in replicated_mesh.C, for example, it looks like the unique_ids > are set in add_node and add_elem, with _next_unique_id being incremented > each time a new ID is set. This looks simple enough, but I was wondering if > there's anything else to it? > That's pretty much all there is to it, at least as far as I know. One concrete question I have is if I create two Mesh objects by reading in > the same mesh exodus file twice, will those two meshes have matching > unique_ids? I guess they will simply because "add_node" and "add_elem" will > be called in the same order during those two read operations. > Yes, the unique ids are only unique per mesh, as the _next_unique_id is owned by MeshBase. I can't see why two meshes created in exactly the same way wouldn't have the exact same unique ids for all nodes and elems. -- John |
From: David K. <dav...@ak...> - 2018-01-10 19:39:09
|
I'd like to use unique_ids to identify elements and nodes without worrying about renumbering, but before I do that I'd like to understand a bit more about how unique_ids are defined. In particular, what is the idea behind how we generate the unique_ids? From the code in replicated_mesh.C, for example, it looks like the unique_ids are set in add_node and add_elem, with _next_unique_id being incremented each time a new ID is set. This looks simple enough, but I was wondering if there's anything else to it? One concrete question I have is if I create two Mesh objects by reading in the same mesh exodus file twice, will those two meshes have matching unique_ids? I guess they will simply because "add_node" and "add_elem" will be called in the same order during those two read operations. Thanks! David |
From: Renato P. <re...@gm...> - 2018-01-08 22:17:36
|
> You could get the dof coefficients corresponding to a du/dx (get dphi
> for each shape function at the point where you want a gradient
> evaluation, take the x component of each) and insert them into a
> constraint row, but this is not likely to be the most accurate way to
> solve your problem. Not sure what you're solving (contact? porous
> media flow?)

Porous media flow (pressure and displacements).

> but there's probably a formulation out there somewhere
> which enforces a pressure-strain relationship weakly rather than with
> point-by-point constraint equations.

I guess I found something in the boundary integrals that come out of Green's theorem. You are right - I was certainly heading in the wrong direction. I believe I'm getting back on track now... Thanks! Renato |
From: Renato P. <re...@gm...> - 2018-01-08 22:14:22
|
> Are you using the homogeneous or the heterogeneous constraint > functions when constraining your system of equations? Often we solve > for a delta between two solutions (Newton steps, time steps with fixed > BC values, etc) rather than directly solving for the constrained > solution, so we have different methods to handle constraints in the > former vs latter case. If you've written your own assembly loop, > check it to make sure you're using the appropriate method? You are right. I was not using the heterogeneous. Now it works. Thanks once again for the excellent support! Renato |
From: Roy S. <roy...@ic...> - 2018-01-08 17:54:34
|
On Sat, 6 Jan 2018, Renato Poli wrote: > Hi again, I am trying to close this issue here, but got into trouble again. > > Is there a common way to tie a dof against the derivative of other dofs. Not really. For one thing, there's no such thing as a spatial derivative of a dof. You can get the spatial derivative of a variable at *some* places in a typical finite element solution, but not everywhere, and in particular not at the vertices (where your lowest-order C0 dofs will be located!). > My primary variables are 'pressure' and 'displacement' and I need to > tie pressure to stress (C du/dx)? > I can imagine using more dofs in the "constraint_row" (the element > ones, which are summed together to get the derivative), but I am > afraid of getting an unstable system. You could get the dof coefficients corresponding to a du/dx (get dphi for each shape function at the point where you want a gradient evaluation, take the x component of each) and insert them into a constraint row, but this is not likely to be the most accurate way to solve your problem. Not sure what you're solving (contact? porous media flow?) but there's probably a formulation out there somewhere which enforces a pressure-strain relationship weakly rather than with point-by-point constraint equations. --- Roy |
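For completeness, a sketch of the mechanical recipe described above (dof coefficients from the x-components of dphi at one point, inserted into a constraint row), with Roy's caveat that this is probably not the most accurate way to couple pressure and strain. The variable names, the coefficient C, and the assumption that dphi was requested before fe.reinit() and that 'fe' was reinitialized on the element at the point of interest are all assumptions of this sketch:

  #include "libmesh/dof_map.h"
  #include "libmesh/elem.h"
  #include "libmesh/fe_base.h"

  #include <vector>

  using namespace libMesh;

  // Constrain a single pressure dof to C * du/dx at one point of one element:
  //   p = C * sum_i (dphi_i/dx) * u_i
  void constrain_pressure_to_du_dx (DofMap & dof_map,
                                    FEBase & fe,
                                    const Elem & elem,
                                    unsigned int u_var,   // displacement variable
                                    dof_id_type p_dof,    // pressure dof to constrain
                                    unsigned int qp,      // point index used in reinit
                                    Real C)
  {
    const std::vector<std::vector<RealGradient>> & dphi = fe.get_dphi();

    std::vector<dof_id_type> u_dofs;
    dof_map.dof_indices(&elem, u_dofs, u_var);   // one dof per shape function

    DofConstraintRow row;
    for (std::size_t i = 0; i != u_dofs.size(); ++i)
      row[u_dofs[i]] = C * dphi[i][qp](0);       // x-component of grad(phi_i)

    dof_map.add_constraint_row(p_dof, row, 0., true);
  }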
From: Roy S. <roy...@ic...> - 2018-01-08 16:28:28
|
On Sun, 7 Jan 2018, Renato Poli wrote: > I am struggling with a Dirichlet boundary condition setting the > boundary to zero, instead of the value I impose (3.e7). Previously, I > was using the penalty method, and it was working fine. > > Is there any known corner condition where the DirichletBoundary would > drive the variable to zero? Any idea? Are you using the homogeneous or the heterogeneous constraint functions when constraining your system of equations? Often we solve for a delta between two solutions (Newton steps, time steps with fixed BC values, etc) rather than directly solving for the constrained solution, so we have different methods to handle constraints in the former vs latter case. If you've written your own assembly loop, check it to make sure you're using the appropriate method? --- Roy |
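A sketch of the distinction drawn above, as it would appear in a hand-written assembly loop. The method names follow my reading of the DofMap header, including the library's "heterogenously" spelling; double-check them against your libMesh version:

  #include "libmesh/dof_map.h"
  #include "libmesh/dense_matrix.h"
  #include "libmesh/dense_vector.h"

  #include <vector>

  using namespace libMesh;

  // Apply constraints to one element's contribution before adding it to the
  // global system; which variant to call depends on what the solve is for.
  void constrain_element (const DofMap & dof_map,
                          DenseMatrix<Number> & Ke,
                          DenseVector<Number> & Fe,
                          std::vector<dof_id_type> & dof_indices,
                          bool solving_for_a_delta)
  {
    if (solving_for_a_delta)
      // Newton updates, increments between time steps, etc.: the constrained
      // part of the increment is zero, so the homogeneous version is right.
      dof_map.constrain_element_matrix_and_vector(Ke, Fe, dof_indices);
    else
      // Solving directly for the constrained solution: the nonzero Dirichlet
      // value (3.e7 in the question above) has to reach the right-hand side.
      dof_map.heterogenously_constrain_element_matrix_and_vector(Ke, Fe, dof_indices);
  }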
From: Renato P. <re...@gm...> - 2018-01-07 17:16:13
|
Hi,

I am struggling with a Dirichlet boundary condition setting the boundary to zero instead of the value I impose (3.e7). Previously, I was using the penalty method, and it was working fine.

Is there any known corner case where the DirichletBoundary would drive the variable to zero? Any idea?

This is the code:

  std::set<boundary_id_type> boundary_ids;
  std::vector<unsigned int> variables;
  boundary_ids.insert(BOUNDARY_RESERVOIR);
  variables.push_back(_pvar);
  DirichletBoundary dirichlet_bc(boundary_ids, variables, ConstFunction<>(3.e7));
  _geomec_sys.get_dof_map().add_dirichlet_boundary(dirichlet_bc);

Note: I read some posts on subdomain issues with BCs. My elements are in "subdomain=2", but as of this test, I removed all other subdomains, without success. Should this matter at all?

Thanks
Renato |
From: Roy S. <roy...@ic...> - 2018-01-05 17:22:34
|
On Fri, 5 Jan 2018, Renato Poli wrote: > I need to tie three dofs together. (u1 + C u2 + D u3=0). I read about the > constraint matrix in the list, but I did not find an example to follow. The > penalty method seems also a nice alternative. > > Is there any example of either to suggest? In systems_of_equations_ex2.C there's an example of using a penalty term to pin pressure at one point; that's probably the closest analogy we have to what you want to do. Using DofMap::add_constraint_row() would be better (you'd be solving the correct equations!) but I don't think we have any use of it in our example codes, and it's slightly trickier: you need to ensure that the DoF you constrain isn't already constrained by e.g. hanging node constraints or boundary condition constraints. --- Roy |
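For the question being answered here (tying u1 + C u2 + D u3 = 0), a sketch of the add_constraint_row() route, rewritten as u1 = -C u2 - D u3. The dof ids and constants are placeholders, and, as noted above, u1 must not already carry a hanging-node or boundary-condition constraint:

  #include "libmesh/dof_map.h"

  using namespace libMesh;

  // Tie u1 + C*u2 + D*u3 = 0 by constraining u1 = -C*u2 - D*u3.
  void tie_three_dofs (DofMap & dof_map,
                       dof_id_type u1, dof_id_type u2, dof_id_type u3,
                       Real C, Real D)
  {
    DofConstraintRow row;
    row[u2] = -C;
    row[u3] = -D;

    // rhs = 0; 'true' forbids silently overwriting an existing constraint.
    dof_map.add_constraint_row(u1, row, 0., true);
  }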
From: John P. <jwp...@gm...> - 2018-01-05 17:08:53
|
On Fri, Jan 5, 2018 at 10:02 AM, Viviana Palacio Betancur < vpa...@uc...> wrote: > No, I have the same variable for the entire mesh. I use the subdomains in > order to impose specific BCs and calculate some properties for the surface > only. > OK, the normal way this would be handled would be to loop over the boundary sides of the volume elements, not to actually have lower-dimensional elements present in the mesh, although I don't see a particular problem with having said elements. -- John |
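A sketch of the boundary-side loop referred to above, i.e. integrating over the sides of the volume elements rather than meshing the surface separately (the boundary id is a placeholder for the surface of interest):

  #include "libmesh/mesh_base.h"
  #include "libmesh/elem.h"
  #include "libmesh/boundary_info.h"

  using namespace libMesh;

  // Visit every exterior side of the volume elements that carries the given id.
  void visit_surface_sides (const MeshBase & mesh, boundary_id_type surface_id)
  {
    const BoundaryInfo & bi = mesh.get_boundary_info();

    for (const Elem * elem : mesh.active_local_element_ptr_range())
      for (unsigned int side = 0; side != elem->n_sides(); ++side)
        if (elem->neighbor_ptr(side) == nullptr &&       // exterior side
            bi.has_boundary_id(elem, side, surface_id))
          {
            // Assemble the surface terms for this side here, e.g. with a side
            // FE object reinitialized via fe_face->reinit(elem, side).
          }
  }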
From: Viviana P. B. <vpa...@uc...> - 2018-01-05 17:03:36
|
No, I have the same variable for the entire mesh. I use the subdomains in order to impose specific BCs and calculate some properties for the surface only. Viviana Palacio Betancur PhD Student | 2016 Cohort de Pablo Group Institute for Molecular Engineering University of Chicago On Fri, Jan 5, 2018 at 10:52 AM, John Peterson <jwp...@gm...> wrote: > > > On Fri, Jan 5, 2018 at 8:39 AM, Viviana Palacio Betancur < > vpa...@uc...> wrote: > >> Hi John, >> >> Thank you for the clarification, it's really helpful. >> >> My assembly function is separated, one for the matrix and another one for >> the rhs. I loop over the elements of each subdomain to avoid the if >> statement. >> > > OK... and are you using subdomain restricted variables? In other words, > are you using a variable that only has DOFs on the boundary subdomain > (system.add_variable(name, order, family, std::set<subdomain_id>))? > > -- > John > |
From: Renato P. <re...@gm...> - 2018-01-05 17:03:18
|
Hi I need to tie three dofs together. (u1 + C u2 + D u3=0). I read about the constraint matrix in the list, but I did not find an example to follow. The penalty method seems also a nice alternative. Is there any example of either to suggest? Thanks once again in advance. Renato |
From: John P. <jwp...@gm...> - 2018-01-05 16:53:12
|
On Fri, Jan 5, 2018 at 8:39 AM, Viviana Palacio Betancur < vpa...@uc...> wrote: > Hi John, > > Thank you for the clarification, it's really helpful. > > My assembly function is separated, one for the matrix and another one for > the rhs. I loop over the elements of each subdomain to avoid the if > statement. > OK... and are you using subdomain restricted variables? In other words, are you using a variable that only has DOFs on the boundary subdomain (system.add_variable(name, order, family, std::set<subdomain_id>))? -- John |
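For reference, a sketch of what a subdomain-restricted variable looks like, following the add_variable() overload mentioned above (subdomain id 2 stands in for the surface subdomain; the overload takes a pointer to the set of active subdomains):

  #include "libmesh/system.h"
  #include "libmesh/enum_order.h"
  #include "libmesh/enum_fe_family.h"

  #include <set>

  using namespace libMesh;

  // One variable on the whole mesh and one restricted to the surface subdomain.
  void add_variables (System & system)
  {
    // Defined everywhere (bulk TET10 and surface TRI6):
    system.add_variable("u", SECOND, LAGRANGE);

    // Defined only on the surface subdomain:
    std::set<subdomain_id_type> surface_only {2};
    system.add_variable("c", SECOND, LAGRANGE, &surface_only);
  }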
From: Viviana P. B. <vpa...@uc...> - 2018-01-05 15:39:42
|
Hi John, Thank you for the clarification, it's really helpful. My assembly function is separated, one for the matrix and another one for the rhs. I loop over the elements of each subdomain to avoid the if statement. Best, Viviana. Viviana Palacio Betancur PhD Student | 2016 Cohort de Pablo Group Institute for Molecular Engineering University of Chicago On Thu, Jan 4, 2018 at 12:10 PM, John Peterson <jwp...@gm...> wrote: > > > On Thu, Jan 4, 2018 at 9:16 AM, Viviana Palacio Betancur < > vpa...@uc...> wrote: > >> Hello, >> >> I am solving a system on a mesh with two subdomains: bulk with elements >> TET10, and surface with elements TRI6. So far, I've sucessfully asembled >> the system's matrix and rhs, and initialized the system. >> >> My problem is when I execute system.solve(). Apparently, the solution is >> only done for the bulk subdomain and thus the surface solution is almost >> 0. >> Is the default setting to solve only for the first subdomain? Should I be >> using the system.restrict_solve_to? >> > > > All variables (even subdomain restricted variables, if you are using > those) in the System are solved for simultaneously. > > Do you have an if-statement in your assembly function that assembles > different PDE terms depending on what subdomain you are currently on? How > is your boundary subdomain actually coupled to the volume? > > -- > John > |
From: John P. <jwp...@gm...> - 2018-01-04 18:10:52
|
On Thu, Jan 4, 2018 at 9:16 AM, Viviana Palacio Betancur < vpa...@uc...> wrote: > Hello, > > I am solving a system on a mesh with two subdomains: bulk with elements > TET10, and surface with elements TRI6. So far, I've sucessfully asembled > the system's matrix and rhs, and initialized the system. > > My problem is when I execute system.solve(). Apparently, the solution is > only done for the bulk subdomain and thus the surface solution is almost 0. > Is the default setting to solve only for the first subdomain? Should I be > using the system.restrict_solve_to? > All variables (even subdomain restricted variables, if you are using those) in the System are solved for simultaneously. Do you have an if-statement in your assembly function that assembles different PDE terms depending on what subdomain you are currently on? How is your boundary subdomain actually coupled to the volume? -- John |
From: Viviana P. B. <vpa...@uc...> - 2018-01-04 16:17:20
|
Hello,

I am solving a system on a mesh with two subdomains: bulk with TET10 elements, and surface with TRI6 elements. So far, I've successfully assembled the system's matrix and rhs, and initialized the system.

My problem is when I execute system.solve(). Apparently, the solution is only computed for the bulk subdomain, and thus the surface solution is almost 0. Is the default setting to solve only for the first subdomain? Should I be using system.restrict_solve_to?

Thank you. Best, Viviana. Viviana Palacio Betancur PhD Student | 2016 Cohort de Pablo Group Institute for Molecular Engineering University of Chicago |