From: John P. <jwp...@gm...> - 2019-02-22 20:27:06
|
On Fri, Feb 22, 2019 at 2:13 PM Renato Poli <re...@gm...> wrote: > The good news is that, if I "reinit" the equation systems in the first > timestep only, it works beautifully. > However, as I rely on this call to change the boundary conditions, that > means I am not able to change the BCs during the run. > > If it is a bug, it looks like workaroundable? > Perhaps forcing reinitialization of some stuff? > What do you think? > If it's a bug, I think we should try to fix it. For that it will be helpful to have a debug-mode stack trace or even better a minimum working example that demonstrates the error. It's possible we are forgetting to close some vector after constraints are applied, I really don't know... -- John |
From: Renato P. <re...@gm...> - 2019-02-22 20:13:36
|
The good news is that, if I "reinit" the equation systems in the first timestep only, it works beautifully. However, as I rely on this call to change the boundary conditions, that means I am not able to change the BCs during the run. If it is a bug, it looks like workaroundable? Perhaps forcing reinitialization of some stuff? What do you think? Thanks Renato On Fri, Feb 22, 2019 at 5:08 PM Renato Poli <re...@gm...> wrote: > Is there any "close all" call? > > On Fri, Feb 22, 2019 at 5:03 PM John Peterson <jwp...@gm...> > wrote: > >> >> >> On Fri, Feb 22, 2019 at 1:53 PM Renato Poli <re...@gm...> wrote: >> >>> Hi John >>> >>> Thanks for the reply. >>> It seems that I moved one small step forward. >>> I added the coupling_functor as you advised - following >>> miscellaneous_ex9. >>> This way, I succeeded to solve the first timestep. >>> I found out I needed to reinit the equation_systems to get the coupling >>> updated. >>> >>> However, now I got stuck in the second timestep, as the >>> equation_systems.reinit() fails. >>> See below. >>> Why whould a v.closed() fail? >>> It doesn't make sense to me. >>> (I am running in a single processor...) >>> >> >> Even on one processor, we keep track of the closed/not closed state of >> vectors and check it before doing certain operations. Without line numbers >> in your stack trace, it's difficult to say exactly where the crash is >> coming from, but it's possible it's the following lines in petsc_vector.C: >> >> // FIXME: Workaround for a strange bug at large-scale. >> // If we have ghosting, PETSc lets us just copy the solution, and >> // doing so avoids a segfault? >> if (v_local_in.type() == GHOSTED && >> this->type() == PARALLEL) >> { >> v_local_in = *this; >> return; >> } >> >> This would mean that current_local_solution is not closed for some >> reason, but that's likely a bug in libmesh that we just haven't encountered >> because we haven't done what you are trying to do before... >> >> -- >> John >> > |
From: Renato P. <re...@gm...> - 2019-02-22 20:09:15
|
Is there any "close all" call? On Fri, Feb 22, 2019 at 5:03 PM John Peterson <jwp...@gm...> wrote: > > > On Fri, Feb 22, 2019 at 1:53 PM Renato Poli <re...@gm...> wrote: > >> Hi John >> >> Thanks for the reply. >> It seems that I moved one small step forward. >> I added the coupling_functor as you advised - following miscellaneous_ex9. >> This way, I succeeded to solve the first timestep. >> I found out I needed to reinit the equation_systems to get the coupling >> updated. >> >> However, now I got stuck in the second timestep, as the >> equation_systems.reinit() fails. >> See below. >> Why whould a v.closed() fail? >> It doesn't make sense to me. >> (I am running in a single processor...) >> > > Even on one processor, we keep track of the closed/not closed state of > vectors and check it before doing certain operations. Without line numbers > in your stack trace, it's difficult to say exactly where the crash is > coming from, but it's possible it's the following lines in petsc_vector.C: > > // FIXME: Workaround for a strange bug at large-scale. > // If we have ghosting, PETSc lets us just copy the solution, and > // doing so avoids a segfault? > if (v_local_in.type() == GHOSTED && > this->type() == PARALLEL) > { > v_local_in = *this; > return; > } > > This would mean that current_local_solution is not closed for some reason, > but that's likely a bug in libmesh that we just haven't encountered because > we haven't done what you are trying to do before... > > -- > John > |
From: John P. <jwp...@gm...> - 2019-02-22 20:03:32
|
On Fri, Feb 22, 2019 at 1:53 PM Renato Poli <re...@gm...> wrote: > Hi John > > Thanks for the reply. > It seems that I moved one small step forward. > I added the coupling_functor as you advised - following miscellaneous_ex9. > This way, I succeeded to solve the first timestep. > I found out I needed to reinit the equation_systems to get the coupling > updated. > > However, now I got stuck in the second timestep, as the > equation_systems.reinit() fails. > See below. > Why whould a v.closed() fail? > It doesn't make sense to me. > (I am running in a single processor...) > Even on one processor, we keep track of the closed/not closed state of vectors and check it before doing certain operations. Without line numbers in your stack trace, it's difficult to say exactly where the crash is coming from, but it's possible it's the following lines in petsc_vector.C: // FIXME: Workaround for a strange bug at large-scale. // If we have ghosting, PETSc lets us just copy the solution, and // doing so avoids a segfault? if (v_local_in.type() == GHOSTED && this->type() == PARALLEL) { v_local_in = *this; return; } This would mean that current_local_solution is not closed for some reason, but that's likely a bug in libmesh that we just haven't encountered because we haven't done what you are trying to do before... -- John |
From: Renato P. <re...@gm...> - 2019-02-22 00:54:50
|
Hi John, I am really stuck here... any clue would be helpful. Can you tell me if the pattern below is familiar? I could not get the "add_extra_ghost_elements" to work. I am using individual "add_constraint_row" calls to model the rigid BC (DOF_I = REFERENCE_DOF). LibMesh fails when I call "reinit_constraints", which internally calls my own "constrain" function to add the constraint rows. This is what I see now: ... # Debug[1]: add_constraint_row: 3781 => 1287 (dof => dof) # Debug[1]: add_constraint_row: 3887 => 1287 # Debug[1]: add_constraint_row: 3927 => 1287 # Debug[1]: add_constraint_row: 4021 => 1287 # Debug[1]: add_constraint_row: 4025 => 1287 Assertion `item.first != expandable' failed. Stack frames: 9 0: libMesh::print_trace(std::ostream&) 1: libMesh::MacroFunctions::report_error(char const*, int, char const*, char const*) 2: libMesh::DofMap::process_constraints(libMesh::MeshBase&) 3: libMesh::System::reinit_constraints() Thanks, Renato On Wed, Feb 20, 2019 at 6:58 PM Renato Poli <re...@gm...> wrote: > Hi John, > > Should I consider "add_extra_ghost_elem"? > Then I add "row_constraints" between the DOFs I need to tie together and > this element. > Does that make sense? > If so, what element should I add? I just need a single DOF to tie many > DOFs together. > > thanks, > Renato > > On Wed, Feb 20, 2019 at 6:07 PM Renato Poli <re...@gm...> wrote: > >> Adding the mailing list in distribution... >> >> On Wed, Feb 20, 2019 at 6:03 PM Renato Poli <re...@gm...> wrote: >> >>> Thanks. >>> >>> Do you see a better way to do? >>> I can see that Abaqus uses an extra node to tie all dofs together. >>> I need the DOFs to be identical. >>> That means eliminating lines in the matrix so that the displacements are >>> identical to each other. >>> >>> Renato >>> >>> On Wed, Feb 20, 2019 at 5:53 PM John Peterson <jwp...@gm...> >>> wrote: >>> >>>> On Wed, Feb 20, 2019 at 2:43 PM Renato Poli <re...@gm...> wrote: >>>> >>>>> Hi all, >>>>> >>>>> Just refreshing this one, because I am sort of stuck in inserting a >>>>> "rigid" >>>>> BCs. >>>>> It seems simple, but I cannot make it work. >>>>> >>>> >>>> Hi, >>>> >>>> From your error message, it sounds like you are introducing a coupling >>>> (through the constraint) between two dofs which would otherwise not be >>>> coupled, and this causes a new nonzero to be inserted into the system >>>> matrix after preallocation. You may be able to work around this issue by >>>> augmenting the sparsity pattern using a GhostingFunctor, similar to what is >>>> done in miscellaneous/miscellaneous_ex9/augment_sparsity_on_interface.h. >>>> >>>> -- >>>> John >>>> >>>> |
From: Renato P. <re...@gm...> - 2019-02-20 21:59:03
|
Hi John, Should I consider "add_extra_ghost_elem"? Then I add "row_constraints" between the DOFs I need to tie together and this element. Does that make sense? If so, what element should I add? I just need a single DOF to tie many DOFs together. thanks, Renato On Wed, Feb 20, 2019 at 6:07 PM Renato Poli <re...@gm...> wrote: > Adding the mailing list in distribution... > > On Wed, Feb 20, 2019 at 6:03 PM Renato Poli <re...@gm...> wrote: > >> Thanks. >> >> Do you see a better way to do? >> I can see that Abaqus uses an extra node to tie all dofs together. >> I need the DOFs to be identical. >> That means eliminating lines in the matrix so that the displacements are >> identical to each other. >> >> Renato >> >> On Wed, Feb 20, 2019 at 5:53 PM John Peterson <jwp...@gm...> >> wrote: >> >>> On Wed, Feb 20, 2019 at 2:43 PM Renato Poli <re...@gm...> wrote: >>> >>>> Hi all, >>>> >>>> Just refreshing this one, because I am sort of stuck in inserting a >>>> "rigid" >>>> BCs. >>>> It seems simple, but I cannot make it work. >>>> >>> >>> Hi, >>> >>> From your error message, it sounds like you are introducing a coupling >>> (through the constraint) between two dofs which would otherwise not be >>> coupled, and this causes a new nonzero to be inserted into the system >>> matrix after preallocation. You may be able to work around this issue by >>> augmenting the sparsity pattern using a GhostingFunctor, similar to what is >>> done in miscellaneous/miscellaneous_ex9/augment_sparsity_on_interface.h. >>> >>> -- >>> John >>> >>> |
From: Renato P. <re...@gm...> - 2019-02-20 21:08:00
|
Adding the mailing list in distribution... On Wed, Feb 20, 2019 at 6:03 PM Renato Poli <re...@gm...> wrote: > Thanks. > > Do you see a better way to do? > I can see that Abaqus uses an extra node to tie all dofs together. > I need the DOFs to be identical. > That means eliminating lines in the matrix so that the displacements are > identical to each other. > > Renato > > On Wed, Feb 20, 2019 at 5:53 PM John Peterson <jwp...@gm...> > wrote: > >> On Wed, Feb 20, 2019 at 2:43 PM Renato Poli <re...@gm...> wrote: >> >>> Hi all, >>> >>> Just refreshing this one, because I am sort of stuck in inserting a >>> "rigid" >>> BCs. >>> It seems simple, but I cannot make it work. >>> >> >> Hi, >> >> From your error message, it sounds like you are introducing a coupling >> (through the constraint) between two dofs which would otherwise not be >> coupled, and this causes a new nonzero to be inserted into the system >> matrix after preallocation. You may be able to work around this issue by >> augmenting the sparsity pattern using a GhostingFunctor, similar to what is >> done in miscellaneous/miscellaneous_ex9/augment_sparsity_on_interface.h. >> >> -- >> John >> >> |
From: John P. <jwp...@gm...> - 2019-02-20 20:53:21
|
On Wed, Feb 20, 2019 at 2:43 PM Renato Poli <re...@gm...> wrote: > Hi all, > > Just refreshing this one, because I am sort of stuck in inserting a "rigid" > BCs. > It seems simple, but I cannot make it work. > Hi, >From your error message, it sounds like you are introducing a coupling (through the constraint) between two dofs which would otherwise not be coupled, and this causes a new nonzero to be inserted into the system matrix after preallocation. You may be able to work around this issue by augmenting the sparsity pattern using a GhostingFunctor, similar to what is done in miscellaneous/miscellaneous_ex9/augment_sparsity_on_interface.h. -- John |
From: Renato P. <re...@gm...> - 2019-02-20 20:43:27
|
Hi all, Just refreshing this one, because I am sort of stuck in inserting a "rigid" BC. It seems simple, but I cannot make it work. Some help would be handy... Maybe adding an extra 'virtual' element would be the best idea? How can I do that? Thanks upfront. Renato On Mon, Feb 18, 2019 at 5:52 PM Renato Poli <re...@gm...> wrote: > Hi, > > I wish to have a rigid boundary condition. I am trying to do so with > add_constraint_row. > The idea would be to tie the Dofs to each other in the constraint function, > before calling the solver. > Not sure it is the best idea, though. > Each processor adds a constraint row to the system regarding the elements > on its domain. > > I see this error: > > [2]PETSC ERROR: --------------------- Error Message > -------------------------------------------------------------- > [2]PETSC ERROR: Argument out of range > [2]PETSC ERROR: New nonzero at (72,1051) caused a malloc > Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn > off this check > [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html > for trouble shooting. > ... > > Should I consider adding MatSetOption? > Any advice? > > Thanks, > Renato > |
From: Renato P. <re...@gm...> - 2019-02-18 20:53:15
|
Hi, I wish to have a rigid boundary condition. I am trying to do so with add_constraint_row. The idea would be to tie the Dofs to each other in the constraint function, before calling the solver. Not sure it is the best idea, though. Each processor adds a constraint row to the system regarding the elements on its domain. I see this error: [2]PETSC ERROR: --------------------- Error Message -------------------------------------------------------------- [2]PETSC ERROR: Argument out of range [2]PETSC ERROR: New nonzero at (72,1051) caused a malloc Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off this check [2]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting. ... Should I consider adding MatSetOption? Any advice? Thanks, Renato |
From: John P. <jwp...@gm...> - 2019-02-12 22:47:59
|
On Tue, Feb 12, 2019 at 12:35 PM Rovinelli, Andrea via Libmesh-users < lib...@li...> wrote: > Maybe this is a question that already has an answer, but I wasn't able to > find it. > I need to use the default OSX compiler because of the interaction with > external library. > I can do everything beside configuring libmesh with opnemp enabled, I > always have to use the flag --with-pthread-mode=none. > > I know that I can enable openmp in clang++ with hombrew libomp > > Does anyone know which are the exact parameters and flags to make it work > under a brand-new installation? > If I recall correctly, the default OSX clang doesn't support openmp. It's one of the reasons the MOOSE team builds their own compiler... -- John |
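[Editor's note: the usual recipe for OpenMP with Apple's stock clang is Homebrew's libomp plus explicit preprocessor/linker flags, since the Apple driver does not accept -fopenmp directly. Whether libMesh's configure accepts these flags untouched is untested; treat the paths and flags below as assumptions, not a verified build recipe.]

```shell
# Assumption: Homebrew is installed. Apple clang lacks -fopenmp driver
# support, so the OpenMP pragma handling and the runtime library are
# passed explicitly via -Xpreprocessor and -lomp.
brew install libomp
LIBOMP_PREFIX="$(brew --prefix libomp)"

./configure \
  CC=clang CXX=clang++ \
  CFLAGS="-Xpreprocessor -fopenmp -I${LIBOMP_PREFIX}/include" \
  CXXFLAGS="-Xpreprocessor -fopenmp -I${LIBOMP_PREFIX}/include" \
  LDFLAGS="-L${LIBOMP_PREFIX}/lib -lomp"
```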
From: Rovinelli, A. <aro...@an...> - 2019-02-12 18:35:39
|
Maybe this is a question that already has an answer, but I wasn't able to find it. I need to use the default OSX compiler because of its interaction with an external library. I can do everything besides configuring libmesh with openmp enabled; I always have to use the flag --with-pthread-mode=none. I know that I can enable openmp in clang++ with Homebrew's libomp. Does anyone know which are the exact parameters and flags to make it work under a brand-new installation? Thanks in advance |
From: Stogner, R. H <roy...@ic...> - 2019-02-08 13:51:34
|
On Thu, 7 Feb 2019, Griffith, Boyce Eugene wrote: > For ReplicatedMesh, does each processor get a contiguous range of element IDs, or can the distribution be more irregular than that? It can definitely be more irregular than that. We guarantee contiguous DoF IDs on each processor but never element or node ids. Even when we do renumbering, on ReplicatedMesh it generally just squeezes numbers down to fill in "gaps" created by adaptive coarsening or other element deletion. --- Roy |
From: John P. <jwp...@gm...> - 2019-02-07 23:45:34
|
On Thu, Feb 7, 2019 at 5:32 PM Griffith, Boyce Eugene <bo...@em...> wrote: > Folks -- > > For ReplicatedMesh, does each processor get a contiguous range of element > IDs, or can the distribution be more irregular than that? > I think this is only true for DistributedMesh. If you take a look at ReplicatedMesh::renumber_nodes_and_elements(), it doesn't really take processor_id() into account. You could test this by changing the processor ids of an existing, partitioned mesh and then calling renumber_nodes_and_elements() to see if anything changes... I don't think it will. -- John |
From: Griffith, B. E. <bo...@em...> - 2019-02-07 23:32:37
|
Folks -- For ReplicatedMesh, does each processor get a contiguous range of element IDs, or can the distribution be more irregular than that? Thanks! -- Boyce |
From: Stogner, R. H <roy...@ic...> - 2019-02-05 22:06:51
|
On Tue, 5 Feb 2019, Salazar De Troya, Miguel wrote: > What's the motivation to have the option unique_id enabled? Speed? Consistency? Paranoia. If you don't have a unique_id build then it should probably be fine to use plain ids and disable renumbering in the read code, I'd just like to have one less possible discrepancy to worry about. --- Roy > Thanks > Miguel > > On 2/5/19, 1:53 PM, "Stogner, Roy H" <roy...@ic...> wrote: > > > On Tue, 5 Feb 2019, Salazar De Troya, Miguel wrote: > > > How can I save the mesh with unique_ids? > > If you have unique_id enabled at configure time then it'll be > automatically saved when you write out xda/xdr/cpa/cpr. > > > Which format would be the best to save the refinement flags? > > ASCII, with one unique_id+refinement flag pair on each line. > --- > Roy > > > Thanks > > Miguel > > > > On 1/31/19, 12:10 PM, "Stogner, Roy H" <roy...@ic...> wrote: > > > > > > On Thu, 31 Jan 2019, Salazar De Troya, Miguel wrote: > > > > > Would it be possible for me to send you the mesh in CPR format just > > > right before calling test_unflagged() in > > > MeshRefinement::refine_and_coarsen_elements()? Would that help to > > > debug this issue? Not sure if the mesh in CPR actually stores the > > > flags for refinement. > > > > It doesn't, I'm afraid. I'd love it if you could send me something > > that reproduces the problem, but it wouldn't be that easy. Maybe save > > a .cpa (for ease of examination) with unique_ids and manually write > > out a separate file with refinement flags? > > --- > > Roy > > > > > > > > |
From: Salazar De T. M. <sal...@ll...> - 2019-02-05 22:00:02
|
What's the motivation to have the option unique_id enabled? Speed? Consistency? Thanks Miguel On 2/5/19, 1:53 PM, "Stogner, Roy H" <roy...@ic...> wrote: On Tue, 5 Feb 2019, Salazar De Troya, Miguel wrote: > How can I save the mesh with unique_ids? If you have unique_id enabled at configure time then it'll be automatically saved when you write out xda/xdr/cpa/cpr. > Which format would be the best to save the refinement flags? ASCII, with one unique_id+refinement flag pair on each line. --- Roy > Thanks > Miguel > > On 1/31/19, 12:10 PM, "Stogner, Roy H" <roy...@ic...> wrote: > > > On Thu, 31 Jan 2019, Salazar De Troya, Miguel wrote: > > > Would it be possible for me to send you the mesh in CPR format just > > right before calling test_unflagged() in > > MeshRefinement::refine_and_coarsen_elements()? Would that help to > > debug this issue? Not sure if the mesh in CPR actually stores the > > flags for refinement. > > It doesn't, I'm afraid. I'd love it if you could send me something > that reproduces the problem, but it wouldn't be that easy. Maybe save > a .cpa (for ease of examination) with unique_ids and manually write > out a separate file with refinement flags? > --- > Roy > > > |
From: Stogner, R. H <roy...@ic...> - 2019-02-05 21:53:35
|
On Tue, 5 Feb 2019, Salazar De Troya, Miguel wrote: > How can I save the mesh with unique_ids? If you have unique_id enabled at configure time then it'll be automatically saved when you write out xda/xdr/cpa/cpr. > Which format would be the best to save the refinement flags? ASCII, with one unique_id+refinement flag pair on each line. --- Roy > Thanks > Miguel > > On 1/31/19, 12:10 PM, "Stogner, Roy H" <roy...@ic...> wrote: > > > On Thu, 31 Jan 2019, Salazar De Troya, Miguel wrote: > > > Would it be possible for me to send you the mesh in CPR format just > > right before calling test_unflagged() in > > MeshRefinement::refine_and_coarsen_elements()? Would that help to > > debug this issue? Not sure if the mesh in CPR actually stores the > > flags for refinement. > > It doesn't, I'm afraid. I'd love it if you could send me something > that reproduces the problem, but it wouldn't be that easy. Maybe save > a .cpa (for ease of examination) with unique_ids and manually write > out a separate file with refinement flags? > --- > Roy > > > |
From: Salazar De T. M. <sal...@ll...> - 2019-02-05 21:45:27
|
How can I save the mesh with unique_ids? Which format would be the best to save the refinement flags? Thanks Miguel On 1/31/19, 12:10 PM, "Stogner, Roy H" <roy...@ic...> wrote: On Thu, 31 Jan 2019, Salazar De Troya, Miguel wrote: > Would it be possible for me to send you the mesh in CPR format just > right before calling test_unflagged() in > MeshRefinement::refine_and_coarsen_elements()? Would that help to > debug this issue? Not sure if the mesh in CPR actually stores the > flags for refinement. It doesn't, I'm afraid. I'd love it if you could send me something that reproduces the problem, but it wouldn't be that easy. Maybe save a .cpa (for ease of examination) with unique_ids and manually write out a separate file with refinement flags? --- Roy |
From: Li L. <li...@ka...> - 2019-02-03 12:56:47
|
Dear developers, I modified both parallel_sort.h and parallel_sort.C by using dof_id_type, and the code successfully compiled, but it still gets stuck at the line Parallel::Sort<Hilbert::HilbertIndices> sorter (communicator, sorted_hilbert_keys); Now, with libHilbert disabled, it works anyway. May I know the largest parallel case ever attempted using libmesh? Thank you. Regards, Li Luo On Fri, Feb 1, 2019 at 12:01 AM Stogner, Roy H <roy...@ic...> wrote: > > On Thu, 31 Jan 2019, Li Luo wrote: > > > I changed from "typename IdxType=unsigned int" to "typename > IdxType=dof_id_type" in parallel_sort.h (libmesh_dir/include/parallel/), > and then > > reconfigure and make. Errors occur as follows: > > Ah, I forgot, you'd also need to change (or add new) instantiations at > the end of parallel_sort.C, from unsigned int to dof_id_type there as > well. > > > I am an old fan of libMesh and I am now still using libmesh0.9.3, > > I'm not sure what percent of our last ~1200 PRs were bug fixes for > bugs that old, but I bet you'll have better luck upgrading to master > than trying to find every applicable patch for 0.9.3. > --- > Roy > -- Postdoctoral Fellow Extreme Computing Research Center King Abdullah University of Science & Technology https://sites.google.com/site/rolyliluo/ -- This message and its contents, including attachments are intended solely for the original recipient. If you are not the intended recipient or have received this message in error, please notify me immediately and delete this message from your computer system. Any unauthorized use or distribution is prohibited. Please consider the environment before printing this email. |
From: Brandon D. <bld...@bu...> - 2019-02-02 17:24:01
|
Good Morning, I used the examples and have figured out some of the coding. I was hoping there was a primer explaining each of the entries for the version. Thank you. Brandon On Fri, Feb 1, 2019, 11:38 AM John Peterson <jwp...@gm... wrote: > > > On Fri, Feb 1, 2019 at 9:10 AM Stogner, Roy H <roy...@ic...> > wrote: > >> >> On Fri, 1 Feb 2019, Brandon Denton wrote: >> >> > I'm currently working through a mesh converter from FEMAP to .xda. Is >> there >> > a primer for the .xda file format? >> >> I'm afraid it's even worse than "no primer"; the XDA format changes >> without notice. We never really conceived of it as a stable >> "standard", just something that could let our users save files without >> trashing data (p refinement levels, exotic FE types, whatever) that >> might not be supported in pre-existing standards. We add new data >> every couple years: nodeset/BC/subdomain names and element unique_id >> values in 0.9.2, node unique_id values in 0.9.6, edge and shellface >> BCs in 1.1.0, more complicated field width encoding in 0.9.2 and again >> in 1.3.0... and we don't even bother adding write options for old >> formats, just backwards compatible reads via version string. >> >> Honestly the easiest way to write a mesh converter would probably be >> to write a libMesh MeshInput subclass and use our meshtool app. If >> you need something standalone, I'm sorry but the only authoritative >> source on the format is src/mesh/xdr_io.C >> > > Of the xda files that are checked in to the repository: > > examples/adaptivity/adaptivity_ex2/mesh.xda > examples/adaptivity/adaptivity_ex3/lshaped.xda > examples/adaptivity/adaptivity_ex3/lshaped3D.xda > examples/subdomains/subdomains_ex3/hybrid_3d.xda > examples/adjoints/adjoints_ex3/H_channel_quads.xda > examples/adjoints/adjoints_ex4/lshaped.xda > examples/adjoints/adjoints_ex1/lshaped.xda > examples/miscellaneous/miscellaneous_ex5/lshaped3D.xda > > One that was generated relatively recently is: > > examples/adaptivity/adaptivity_ex3/lshaped.xda > > so you could probably use that to reverse-engineer most of the current > file format. > > Also, I assume you are only interested in reading mesh data (nodes, > elements, connectivity, etc.)? Solution data uses a different file and > format from mesh data. > > -- > John > |
From: John P. <jwp...@gm...> - 2019-02-01 16:38:30
|
On Fri, Feb 1, 2019 at 9:10 AM Stogner, Roy H <roy...@ic...> wrote: > > On Fri, 1 Feb 2019, Brandon Denton wrote: > > > I'm currently working through a mesh converter from FEMAP to .xda. Is > there > > a primer for the .xda file format? > > I'm afraid it's even worse than "no primer"; the XDA format changes > without notice. We never really conceived of it as a stable > "standard", just something that could let our users save files without > trashing data (p refinement levels, exotic FE types, whatever) that > might not be supported in pre-existing standards. We add new data > every couple years: nodeset/BC/subdomain names and element unique_id > values in 0.9.2, node unique_id values in 0.9.6, edge and shellface > BCs in 1.1.0, more complicated field width encoding in 0.9.2 and again > in 1.3.0... and we don't even bother adding write options for old > formats, just backwards compatible reads via version string. > > Honestly the easiest way to write a mesh converter would probably be > to write a libMesh MeshInput subclass and use our meshtool app. If > you need something standalone, I'm sorry but the only authoritative > source on the format is src/mesh/xdr_io.C > Of the xda files that are checked in to the repository: examples/adaptivity/adaptivity_ex2/mesh.xda examples/adaptivity/adaptivity_ex3/lshaped.xda examples/adaptivity/adaptivity_ex3/lshaped3D.xda examples/subdomains/subdomains_ex3/hybrid_3d.xda examples/adjoints/adjoints_ex3/H_channel_quads.xda examples/adjoints/adjoints_ex4/lshaped.xda examples/adjoints/adjoints_ex1/lshaped.xda examples/miscellaneous/miscellaneous_ex5/lshaped3D.xda One that was generated relatively recently is: examples/adaptivity/adaptivity_ex3/lshaped.xda so you could probably use that to reverse-engineer most of the current file format. Also, I assume you are only interested in reading mesh data (nodes, elements, connectivity, etc.)? Solution data uses a different file and format from mesh data. -- John |
From: Stogner, R. H <roy...@ic...> - 2019-02-01 15:10:23
|
On Fri, 1 Feb 2019, Brandon Denton wrote: > I'm currently working through a mesh converter from FEMAP to .xda. Is there > a primer for the .xda file format? I'm afraid it's even worse than "no primer"; the XDA format changes without notice. We never really conceived of it as a stable "standard", just something that could let our users save files without trashing data (p refinement levels, exotic FE types, whatever) that might not be supported in pre-existing standards. We add new data every couple years: nodeset/BC/subdomain names and element unique_id values in 0.9.2, node unique_id values in 0.9.6, edge and shellface BCs in 1.1.0, more complicated field width encoding in 0.9.2 and again in 1.3.0... and we don't even bother adding write options for old formats, just backwards compatible reads via version string. Honestly the easiest way to write a mesh converter would probably be to write a libMesh MeshInput subclass and use our meshtool app. If you need something standalone, I'm sorry but the only authoritative source on the format is src/mesh/xdr_io.C --- Roy |
From: Brandon D. <bld...@bu...> - 2019-02-01 13:33:41
|
Good Morning, I'm currently working through a mesh converter from FEMAP to .xda. Is there a primer for the .xda file format? Thank you. Brandon |
From: Stogner, R. H <roy...@ic...> - 2019-01-31 21:01:13
|
On Thu, 31 Jan 2019, Li Luo wrote: > I changed from "typename IdxType=unsigned int" to "typename IdxType=dof_id_type" in parallel_sort.h (libmesh_dir/include/parallel/), and then > reconfigure and make. Errors occur as follows: Ah, I forgot, you'd also need to change (or add new) instantiations at the end of parallel_sort.C, from unsigned int to dof_id_type there as well. > I am an old fan of libMesh and I am now still using libmesh0.9.3, I'm not sure what percent of our last ~1200 PRs were bug fixes for bugs that old, but I bet you'll have better luck upgrading to master than trying to find every applicable patch for 0.9.3. --- Roy |