From: Manav B. <bha...@gm...> - 2019-05-29 18:02:12
|
Hi,

Is there a quick way to get the non-zero locations of a matrix? I am using libMesh with PETSc. I am not sure if there is one in libMesh. The function in PETSc seems to only work for sequential matrices (https://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Mat/MatGetRowIJ.html).

Regards,
Manav |
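For parallel matrices, one workaround (an assumption on my part, not something libMesh ships) is to loop over the locally owned rows and query them one at a time; in PETSc that would be MatGetOwnershipRange() plus MatGetRow()/MatRestoreRow(), which, unlike MatGetRowIJ(), work for parallel AIJ matrices. A minimal sketch of that loop, written against a toy CSR layout rather than a real PETSc Mat so it can run standalone:

```cpp
// Sketch: collecting the (row, col) locations of the stored nonzeros.
// In PETSc one would loop over [rstart, rend) from MatGetOwnershipRange()
// and call MatGetRow()/MatRestoreRow() for each local row. Here the same
// loop structure is shown against a toy CSR layout so the example is
// self-contained; CsrMatrix and nonzero_locations() are illustrative names.
#include <cassert>
#include <cstddef>
#include <utility>
#include <vector>

struct CsrMatrix {
  std::vector<std::size_t> row_ptr; // size = n_rows + 1
  std::vector<std::size_t> col_idx; // column index of each stored entry
  std::vector<double> vals;         // value of each stored entry
};

// Return the (row, col) pairs of entries stored with a nonzero value.
std::vector<std::pair<std::size_t, std::size_t>>
nonzero_locations(const CsrMatrix & m)
{
  std::vector<std::pair<std::size_t, std::size_t>> locs;
  const std::size_t n_rows = m.row_ptr.size() - 1;
  for (std::size_t row = 0; row < n_rows; ++row)  // MatGetOwnershipRange() loop
    for (std::size_t k = m.row_ptr[row]; k < m.row_ptr[row + 1]; ++k) // MatGetRow()
      if (m.vals[k] != 0.0)
        locs.emplace_back(row, m.col_idx[k]);
  return locs;
}
```

In a real PETSc code each rank would run this loop only over its own ownership range, so the results are naturally distributed.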
From: Stogner, R. H <roy...@ic...> - 2019-05-28 21:45:30
|
On Mon, 27 May 2019, Alexander Lindsay wrote:

> Does it make sense/do we have the machinery to do periodic boundary
> conditions with discontinuous variables like the user below is asking for?

We don't even do internal continuity with discontinuous variables; naturally we're not going to do continuity across domain boundaries. Typically if you're using a discontinuous variable you expect discontinuities, right? If you actually do try to pin the value of the variable to be exactly equal from one side of an interface to the other, you end up killing your convergence: at best you've created an effective "element" of size 2h and at worst you've screwed up consistency.

If you have a mesh that matches up across a periodic boundary (as you have to for libMesh PeriodicBoundary stuff to make sense even in the C0 case), then ideally what you want to do is add interface terms to your formulation to give the same sort of weak continuity enforcement across boundary sides that you'd normally use between interior neighbors; basically, instead of skipping boundary sides in that loop, you check to see if they're periodic boundary sides and you use the periodic neighbor for whatever jump/flux/etc. terms are in your weak equations.

> Maybe this is a use case for a face-face type discretization like mortar...

If you don't have a mesh that matches up perfectly across the boundary, then I think mortar methods may be the way to go.

---
Roy |
From: John P. <jwp...@gm...> - 2019-05-28 13:30:42
|
On Mon, May 27, 2019 at 12:39 PM Amneet Bhalla <mai...@gm...> wrote:

> Hi Guys,
>
> Is there a way to generate a triangle shaped object (and eventually
> triangulate it) using libMesh directly?

Yes, this is definitely possible if you configure libmesh with --disable-strict-lgpl so that Triangle support is available. See misc_ex6 for a simple example of how this might be accomplished.

-- John |
From: Alexander L. <ale...@gm...> - 2019-05-27 20:38:56
|
Does it make sense/do we have the machinery to do periodic boundary conditions with discontinuous variables like the user below is asking for? In the PeriodicBoundary class header, I see discussion of nodes, which makes sense... but doesn't look promising for this guy. Maybe this is a use case for a face-face type discretization like mortar...

---------- Forwarded message ---------
From: Daniel Wojtaszek <dan...@cn...>
Date: Fri, May 24, 2019 at 9:53 AM
Subject: Periodic boundary condition for L2_Lagrange variable?
To: moose-users <moo...@go...>

Is there a way to have periodic boundary conditions on an L2_Lagrange variable? Using the following BC works for a Lagrange variable but not L2_Lagrange:

[BCs]
  [./Periodic]
    [./top]
      variable = u
      primary = top
      secondary = bottom
      translation = '0 0 -15'
    [../]
  [../]
[]

Cheers,
Dan

--
You received this message because you are subscribed to the Google Groups "moose-users" group. To unsubscribe from this group and stop receiving emails from it, send an email to moo...@go.... Visit this group at https://groups.google.com/group/moose-users. To view this discussion on the web visit https://groups.google.com/d/msgid/moose-users/239cecf9-6652-41e7-83d4-33c27c0aa27f%40googlegroups.com. For more options, visit https://groups.google.com/d/optout. |
From: David K. <dav...@ak...> - 2019-05-27 18:18:03
|
The example is implemented based on non-dimensional quantities, so you'd have to re-dimensionalize in order to figure out the physical displacements. You could do that if you like; it's just a matter of applying appropriate scaling.

Best,
David

On Mon, May 27, 2019 at 12:29 PM Nikrouz <nik...@gm...> wrote:

> Dear all libMesh users
>
> I am new to finite element. I have run the linear Elastic Cantilever
> Example4 (System of Equation).
> (https://github.com/libMesh/libmesh/tree/master/examples/systems_of_equations/systems_of_equations_ex4)
>
> When I visualize the results, the value of u v are too big compared
> with size of the Cantilever.
>
> The dimensions of Cantilever is 1.0 x 0.2 while the maximum value of v
> is 94,80,.. for example.
>
> are the results meaningful? I can not visualize the displacement
> vector(u*iHat+v*jHat) to see the deformed body.
>
> Thank you,
>
> _______________________________________________
> Libmesh-users mailing list
> Lib...@li...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
 |
From: Amneet B. <mai...@gm...> - 2019-05-27 17:39:22
|
Hi Guys,

Is there a way to generate a triangle shaped object (and eventually triangulate it) using libMesh directly?

Thanks,
—Amneet

--
--Amneet |
From: Nikrouz <nik...@gm...> - 2019-05-27 16:29:21
|
Dear all libMesh users,

I am new to finite element. I have run the linear elastic cantilever example 4 (systems of equations): https://github.com/libMesh/libmesh/tree/master/examples/systems_of_equations/systems_of_equations_ex4

When I visualize the results, the values of u and v are too big compared with the size of the cantilever. The dimensions of the cantilever are 1.0 x 0.2, while the maximum value of v is 94, 80, .. for example.

Are the results meaningful? I cannot visualize the displacement vector (u*iHat + v*jHat) to see the deformed body.

Thank you, |
From: Paul T. B. <ptb...@gm...> - 2019-05-24 14:53:31
|
On Fri, May 24, 2019 at 10:44 AM Nikrouz <nik...@gm...> wrote:

> Thank you for your answer, I will try to apply what you said in my case.
>
> A basic question, what is the difference between plain examples and
> FEMSystem examples totally?

FEMSystem is a framework within libMesh to facilitate constructing FEM programs. So things like constructing the finite element objects and looping over elements (and the parallelism therein) are all handled behind the scenes, and you just provide element-wise evaluations. GRINS (https://grinsfem.github.io) is built on this and we have a paper on it (https://epubs.siam.org/doi/abs/10.1137/15M1026110).

My suggestion would be to start with the "plain" examples, since that gives you the most flexibility to do exactly what you want to do and can get you started, and then if you're interested, we can help you migrate to FEMSystem. But that's just a suggestion. Let us know if we can answer more questions.

Best,

Paul

> Thank you again for your guidance.
>
> On 2019-05-24 8:47 a.m., Paul T. Bauman wrote:
> > Hello,
> >
> > On Thu, May 23, 2019 at 3:45 PM Nikrouz <nik...@gm...> wrote:
> >> Dear All libMesh uses,
> >>
> >> I want to apply traction boundary condition on two of the surfaces of my
> >> geometry. As far as I know, There are two methods
> >> for defining such a boundary condition: *Using penalty method*
> >
> > The penalty method is used for enforcing Dirichlet boundary conditions.
> >
> >> and
> >> *side_time_derivative method*.
> >
> > This is correct, but it is used within the FEMSystem framework. You'd want
> > to refer to the fem_system examples in this case.
> >
> >> I have a couple of questions:
> >>
> >> 1- Which method is more straightforward for defining boundary
> >> condition(traction) in libMesh?
> >
> > Please have a look at the following example:
> > https://github.com/libMesh/libmesh/blob/master/examples/systems_of_equations/systems_of_equations_ex6/systems_of_equations_ex6.C
> >
> > It is a "plain" (non FEMSystem) example. If you look in the assemble()
> > function, you will see that following the element interior loop, there is a
> > loop over boundary elements and then the traction is applied appropriately.
> >
> > HTH,
> >
> > Paul
> >
> >> 2- I have searched different examples in libMesh and I could not find
> >> any case that has used side_time_derivative method(for example defined
> >> in fem-ex3). Is there any example illustrates how can I call the
> >> mentioned method in the code? Where should I call it? How?
> >>
> >> In the fem-ex3 example, the method needs two inputs:
> >>
> >> bool ElasticitySystem::side_time_derivative (bool request_jacobian,
> >>                                              DiffContext & context)
> >>
> >> FEMContext & c = cast_ref<FEMContext &>(context);
> >>
> >> // If we're on the correct side, apply the traction
> >> if (c.has_side_boundary_id(BOUNDARY_ID_MAX_X))
> >> {
> >> ......
> >>
> >> How and where should I use this method in the code to have a traction
> >> boundary condition?
> >>
> >> Thank you everybody!
> >>
> >> _______________________________________________
> >> Libmesh-users mailing list
> >> Lib...@li...
> >> https://lists.sourceforge.net/lists/listinfo/libmesh-users
 |
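To make the ex6-style boundary term concrete: for a straight 2D side of length L with linear shape functions and a constant traction t, the side's contribution to the load vector works out to t*L/2 at each of its two nodes. A minimal sketch of that side integral, in plain C++ with a 2-point Gauss rule (not the libMesh API; the function name here is made up for illustration):

```cpp
// Sketch of the boundary-side traction term discussed above: after the
// element-interior loop, each boundary side contributes
//   f_i += integral over the side of (traction * phi_i).
// For a straight side with linear shape functions and constant traction
// this reduces to t * L / 2 at each of the side's two nodes.
#include <array>
#include <cassert>
#include <cmath>

// Load-vector contribution of one straight side of length L under constant
// scalar traction t, using 2-point Gauss quadrature on the reference side.
std::array<double, 2> side_traction_rhs(double t, double L)
{
  // 2-point Gauss rule on [-1, 1]: points +-1/sqrt(3), weights 1.
  const double q[2] = {-1.0 / std::sqrt(3.0), 1.0 / std::sqrt(3.0)};
  std::array<double, 2> f{0.0, 0.0};
  for (double xi : q)
  {
    const double phi0 = 0.5 * (1.0 - xi); // linear shape functions on the side
    const double phi1 = 0.5 * (1.0 + xi);
    const double JxW = L / 2.0;           // side Jacobian times unit weight
    f[0] += t * phi0 * JxW;
    f[1] += t * phi1 * JxW;
  }
  return f;
}
```

In a libMesh assemble() loop the same integrand would be accumulated with the side FE object's phi and JxW arrays, restricted to sides on the traction boundary id.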
From: Paul T. B. <ptb...@gm...> - 2019-05-24 12:47:39
|
Hello,

On Thu, May 23, 2019 at 3:45 PM Nikrouz <nik...@gm...> wrote:

> Dear All libMesh uses,
>
> I want to apply traction boundary condition on two of the surfaces of my
> geometry. As far as I know, There are two methods
> for defining such a boundary condition: *Using penalty method*

The penalty method is used for enforcing Dirichlet boundary conditions.

> and
> *side_time_derivative method*.

This is correct, but it is used within the FEMSystem framework. You'd want to refer to the fem_system examples in this case.

> I have a couple of questions:
>
> 1- Which method is more straightforward for defining boundary
> condition(traction) in libMesh?

Please have a look at the following example: https://github.com/libMesh/libmesh/blob/master/examples/systems_of_equations/systems_of_equations_ex6/systems_of_equations_ex6.C

It is a "plain" (non FEMSystem) example. If you look in the assemble() function, you will see that following the element interior loop, there is a loop over boundary elements and then the traction is applied appropriately.

HTH,

Paul

> 2- I have searched different examples in libMesh and I could not find
> any case that has used side_time_derivative method(for example defined
> in fem-ex3). Is there any example illustrates how can I call the
> mentioned method in the code? Where should I call it? How?
>
> In the fem-ex3 example, the method needs two inputs:
>
> bool ElasticitySystem::side_time_derivative (bool request_jacobian,
>                                              DiffContext & context)
>
> FEMContext & c = cast_ref<FEMContext &>(context);
>
> // If we're on the correct side, apply the traction
> if (c.has_side_boundary_id(BOUNDARY_ID_MAX_X))
> {
> ......
>
> How and where should I use this method in the code to have a traction
> boundary condition?
>
> Thank you everybody!
>
> _______________________________________________
> Libmesh-users mailing list
> Lib...@li...
> https://lists.sourceforge.net/lists/listinfo/libmesh-users
 |
From: Nikrouz <nik...@gm...> - 2019-05-23 19:45:30
|
Dear all libMesh users,

I want to apply a traction boundary condition on two of the surfaces of my geometry. As far as I know, there are two methods for defining such a boundary condition: *using the penalty method* and *the side_time_derivative method*. I have a couple of questions:

1- Which method is more straightforward for defining a boundary condition (traction) in libMesh?

2- I have searched different examples in libMesh and I could not find any case that has used the side_time_derivative method (for example, the one defined in fem-ex3). Is there any example that illustrates how I can call the mentioned method in the code? Where should I call it? How?

In the fem-ex3 example, the method needs two inputs:

  bool ElasticitySystem::side_time_derivative (bool request_jacobian,
                                               DiffContext & context)

  FEMContext & c = cast_ref<FEMContext &>(context);

  // If we're on the correct side, apply the traction
  if (c.has_side_boundary_id(BOUNDARY_ID_MAX_X))
  {
  ......

How and where should I use this method in the code to have a traction boundary condition?

Thank you everybody! |
From: John P. <jwp...@gm...> - 2019-05-22 15:03:30
|
On Wed, May 22, 2019 at 3:30 AM marco <in...@gm...> wrote:

> Mass lumping is defined by using trapezoidal quadrature rule.
> In this way, the weights of a trapezoidal rule should be the integral over
> the element of each basis function.
>
> I noticed that this is not always true for hexahedral elements. Am I
> missing something or could there be a bug?

We form the trapezoidal rule for tri-linear hexahedral elements by taking the tensor product of the 1D trapezoidal rule, which simply has all weights = 1. This results in all weights equal to 1 for the HEX8. Are you talking about the (tri-quadratic) HEX27? For those elements I think you should use a QSimpson rule, which is also formed from tensor products, but I don't know if that results in your statement,

> the weights of a trapezoidal rule should be the integral over the element of each basis function.

being satisfied. I have not heard of this condition before, but it seems plausible... I would have to look into it further.

-- John |
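The condition under discussion (nodal-quadrature weights equal the integrals of the corresponding basis functions) is easy to check numerically in 1D, which is where the tensor-product construction starts. A minimal sketch in plain C++ (not libMesh code): trapezoidal weights on [0, h] are h/2 each, matching the P1 hat-function integrals, and Simpson weights h/6, 4h/6, h/6 match the P2 Lagrange basis integrals.

```cpp
// Numerical check of the lumping claim: for 1D Lagrange elements, the
// nodal-quadrature weights (trapezoidal for linear, Simpson for quadratic)
// equal the integrals of the corresponding basis functions over the element.
// The tensor-product hex case follows the same pattern dimension by dimension.
#include <cassert>
#include <cmath>

// Fine composite midpoint rule on [0, h]; accurate enough for these checks.
template <typename F>
double integrate(F f, double h, int n = 200000)
{
  double sum = 0.0;
  const double dx = h / n;
  for (int i = 0; i < n; ++i)
    sum += f((i + 0.5) * dx) * dx;
  return sum;
}

// Linear (P1) hat functions on [0, h].
double p1_left (double x, double h) { return 1.0 - x / h; }
double p1_right(double x, double h) { return x / h; }

// Quadratic (P2) Lagrange basis on [0, h], nodes at 0, h/2, h.
double p2_node0(double x, double h) { double t = x / h; return (1 - t) * (1 - 2 * t); }
double p2_node1(double x, double h) { double t = x / h; return 4 * t * (1 - t); }
double p2_node2(double x, double h) { double t = x / h; return t * (2 * t - 1); }
```

Running the analogous check with trapezoidal weights against the P2 basis shows the mismatch the question is about: 1, 1 (times h/2) does not equal h/6, 4h/6, h/6.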
From: marco <in...@gm...> - 2019-05-22 08:30:18
|
Mass lumping is defined by using the trapezoidal quadrature rule. In this way, the weights of a trapezoidal rule should be the integral over the element of each basis function.

I noticed that this is not always true for hexahedral elements. Am I missing something or could there be a bug?

Thank you |
From: Yuxiang W. <yw...@vi...> - 2019-05-21 20:51:17
|
Thank you Derek! I actually looked into MOOSE before my own implementation but then I found this: https://groups.google.com/forum/#!searchin/moose-users/central$20difference%7Csort:date/moose-users/FJnjdc7u6Xc/juFZpdXMBwAJ Let me follow up the thread on MOOSE and continue there :) Best, Shawn On Tue, May 21, 2019 at 5:37 AM Derek Gaston <fri...@gm...> wrote: > Yep - vector_mult, reciprocal, and pointwise_mult are how we do explicit > updating in MOOSE: > > > https://github.com/idaholab/moose/blob/next/framework/src/timeintegrators/ActuallyExplicitEuler.C#L182 > > Derek > > > On Mon, May 20, 2019 at 2:38 PM Yuxiang Wang <yw...@vi...> wrote: > >> Thank you Jed & John! That's extremely helpful. >> >> John - thank you so much for the insight and pointer to reciprocal() and >> pointwise_mult()! As for adding additional NumericVector to the system - >> do >> you happen to know any example or code snippet that did this? What I am >> doing now is creating a new ExplicitSystem and using its solution vector >> as >> the storage, which handles the initialization quite nicely but maybe is an >> overkill. >> >> Best, >> Shawn >> >> On Mon, May 20, 2019 at 7:11 AM Jed Brown <je...@je...> wrote: >> >> > John Peterson <jwp...@gm...> writes: >> > >> > > On Sun, May 19, 2019 at 9:15 PM Yuxiang Wang <yw...@vi...> >> wrote: >> > > >> > >> Dear all, >> > >> >> > >> Sorry for the spam and being new to the field. >> > >> >> > >> I am currently trying to implement an elastodynamics problem with >> > explicit >> > >> method (central difference method, to be specific). I am planning to >> use >> > >> lumped mass matrix (and Rayleigh damping when needed), so the system >> > matrix >> > >> will be simply a diagonal matrix. Solving is therefore trivial - I >> just >> > >> need to do per-element division of the rhs by the diagonal entries to >> > get >> > >> the solution vector. 
>> > >> >> > >> For this problem, I have the option of just treating it as an >> implicit >> > >> system - fill the system.matrix with only diagonal components and >> call >> > the >> > >> PETSc LU solver. This is very easy thanks to a lot of the libmesh >> > >> infrastructure available. If I do so, should I be concerned about a >> > >> significant slowdown? Or would PETSc be smart enough to realize that >> > this >> > >> is already diagonal matrix and be efficient in solving it? >> > >> >> > > >> > > Yes, I'm pretty sure this will be significantly slower than doing a >> > > "manual" inversion by taking the reciprocal of the matrix diagonal. I >> > don't >> > > know of anything in PETSc's LU solver that will detect this particular >> > > special case. For an explicit code where you want every timestep to >> be as >> > > fast as possible, it will likely be prohibitively slow. >> > >> > If you have a sparse matrix with only the diagonal nonzero, then LU will >> > create no fill and it'll actually be pretty fast (likely hard to measure >> > compared to the cost of evaluating the explicit part), but -pc_type >> > jacobi would also be an exact solver and is more precisely what you >> > need. Note that the default bjacobi/ilu is also an exact solver in this >> > circumstance, as are many other preconditioners. >> > >> > > My other choice would be to create a NumericVector myself to store the >> > >> diagonal system matrix entries, and perform the per-element division. >> > This >> > >> would take more work and will not be using the already well-tested >> > libmesh >> > >> infrastructure, so I am trying to see whether I can get away with >> doing >> > >> this without compromising on the performance. >> > > >> > > >> > > This would probably work best. You can add additional vectors to >> Systems >> > > and assemble into them similarly to the way you assemble the >> right-hand >> > > side vector. 
Also note that NumericVectors have the reciprocal() API, >> > which >> > > will allow you to quickly compute the inverse, as well as >> > pointwise_mult() >> > > which should allow you to quickly apply it. >> > > >> > > -- >> > > John >> > > >> > > _______________________________________________ >> > > Libmesh-users mailing list >> > > Lib...@li... >> > > https://lists.sourceforge.net/lists/listinfo/libmesh-users >> > >> >> >> -- >> Yuxiang "Shawn" Wang, PhD >> yw...@vi... >> +1 (434) 284-0836 >> >> _______________________________________________ >> Libmesh-users mailing list >> Lib...@li... >> https://lists.sourceforge.net/lists/listinfo/libmesh-users >> > -- Yuxiang "Shawn" Wang, PhD yw...@vi... +1 (434) 284-0836 |
From: Derek G. <fri...@gm...> - 2019-05-21 12:37:27
|
Yep - vector_mult, reciprocal, and pointwise_mult are how we do explicit updating in MOOSE: https://github.com/idaholab/moose/blob/next/framework/src/timeintegrators/ActuallyExplicitEuler.C#L182 Derek On Mon, May 20, 2019 at 2:38 PM Yuxiang Wang <yw...@vi...> wrote: > Thank you Jed & John! That's extremely helpful. > > John - thank you so much for the insight and pointer to reciprocal() and > pointwise_mult()! As for adding additional NumericVector to the system - do > you happen to know any example or code snippet that did this? What I am > doing now is creating a new ExplicitSystem and using its solution vector as > the storage, which handles the initialization quite nicely but maybe is an > overkill. > > Best, > Shawn > > On Mon, May 20, 2019 at 7:11 AM Jed Brown <je...@je...> wrote: > > > John Peterson <jwp...@gm...> writes: > > > > > On Sun, May 19, 2019 at 9:15 PM Yuxiang Wang <yw...@vi...> > wrote: > > > > > >> Dear all, > > >> > > >> Sorry for the spam and being new to the field. > > >> > > >> I am currently trying to implement an elastodynamics problem with > > explicit > > >> method (central difference method, to be specific). I am planning to > use > > >> lumped mass matrix (and Rayleigh damping when needed), so the system > > matrix > > >> will be simply a diagonal matrix. Solving is therefore trivial - I > just > > >> need to do per-element division of the rhs by the diagonal entries to > > get > > >> the solution vector. > > >> > > >> For this problem, I have the option of just treating it as an implicit > > >> system - fill the system.matrix with only diagonal components and call > > the > > >> PETSc LU solver. This is very easy thanks to a lot of the libmesh > > >> infrastructure available. If I do so, should I be concerned about a > > >> significant slowdown? Or would PETSc be smart enough to realize that > > this > > >> is already diagonal matrix and be efficient in solving it? 
> > >> > > > > > > Yes, I'm pretty sure this will be significantly slower than doing a > > > "manual" inversion by taking the reciprocal of the matrix diagonal. I > > don't > > > know of anything in PETSc's LU solver that will detect this particular > > > special case. For an explicit code where you want every timestep to be > as > > > fast as possible, it will likely be prohibitively slow. > > > > If you have a sparse matrix with only the diagonal nonzero, then LU will > > create no fill and it'll actually be pretty fast (likely hard to measure > > compared to the cost of evaluating the explicit part), but -pc_type > > jacobi would also be an exact solver and is more precisely what you > > need. Note that the default bjacobi/ilu is also an exact solver in this > > circumstance, as are many other preconditioners. > > > > > My other choice would be to create a NumericVector myself to store the > > >> diagonal system matrix entries, and perform the per-element division. > > This > > >> would take more work and will not be using the already well-tested > > libmesh > > >> infrastructure, so I am trying to see whether I can get away with > doing > > >> this without compromising on the performance. > > > > > > > > > This would probably work best. You can add additional vectors to > Systems > > > and assemble into them similarly to the way you assemble the right-hand > > > side vector. Also note that NumericVectors have the reciprocal() API, > > which > > > will allow you to quickly compute the inverse, as well as > > pointwise_mult() > > > which should allow you to quickly apply it. > > > > > > -- > > > John > > > > > > _______________________________________________ > > > Libmesh-users mailing list > > > Lib...@li... > > > https://lists.sourceforge.net/lists/listinfo/libmesh-users > > > > > -- > Yuxiang "Shawn" Wang, PhD > yw...@vi... > +1 (434) 284-0836 > > _______________________________________________ > Libmesh-users mailing list > Lib...@li... 
> https://lists.sourceforge.net/lists/listinfo/libmesh-users > |
From: Yuxiang W. <yw...@vi...> - 2019-05-20 20:38:35
|
Thank you Jed & John! That's extremely helpful. John - thank you so much for the insight and pointer to reciprocal() and pointwise_mult()! As for adding additional NumericVector to the system - do you happen to know any example or code snippet that did this? What I am doing now is creating a new ExplicitSystem and using its solution vector as the storage, which handles the initialization quite nicely but maybe is an overkill. Best, Shawn On Mon, May 20, 2019 at 7:11 AM Jed Brown <je...@je...> wrote: > John Peterson <jwp...@gm...> writes: > > > On Sun, May 19, 2019 at 9:15 PM Yuxiang Wang <yw...@vi...> wrote: > > > >> Dear all, > >> > >> Sorry for the spam and being new to the field. > >> > >> I am currently trying to implement an elastodynamics problem with > explicit > >> method (central difference method, to be specific). I am planning to use > >> lumped mass matrix (and Rayleigh damping when needed), so the system > matrix > >> will be simply a diagonal matrix. Solving is therefore trivial - I just > >> need to do per-element division of the rhs by the diagonal entries to > get > >> the solution vector. > >> > >> For this problem, I have the option of just treating it as an implicit > >> system - fill the system.matrix with only diagonal components and call > the > >> PETSc LU solver. This is very easy thanks to a lot of the libmesh > >> infrastructure available. If I do so, should I be concerned about a > >> significant slowdown? Or would PETSc be smart enough to realize that > this > >> is already diagonal matrix and be efficient in solving it? > >> > > > > Yes, I'm pretty sure this will be significantly slower than doing a > > "manual" inversion by taking the reciprocal of the matrix diagonal. I > don't > > know of anything in PETSc's LU solver that will detect this particular > > special case. For an explicit code where you want every timestep to be as > > fast as possible, it will likely be prohibitively slow. 
> > If you have a sparse matrix with only the diagonal nonzero, then LU will > create no fill and it'll actually be pretty fast (likely hard to measure > compared to the cost of evaluating the explicit part), but -pc_type > jacobi would also be an exact solver and is more precisely what you > need. Note that the default bjacobi/ilu is also an exact solver in this > circumstance, as are many other preconditioners. > > > My other choice would be to create a NumericVector myself to store the > >> diagonal system matrix entries, and perform the per-element division. > This > >> would take more work and will not be using the already well-tested > libmesh > >> infrastructure, so I am trying to see whether I can get away with doing > >> this without compromising on the performance. > > > > > > This would probably work best. You can add additional vectors to Systems > > and assemble into them similarly to the way you assemble the right-hand > > side vector. Also note that NumericVectors have the reciprocal() API, > which > > will allow you to quickly compute the inverse, as well as > pointwise_mult() > > which should allow you to quickly apply it. > > > > -- > > John > > > > _______________________________________________ > > Libmesh-users mailing list > > Lib...@li... > > https://lists.sourceforge.net/lists/listinfo/libmesh-users > -- Yuxiang "Shawn" Wang, PhD yw...@vi... +1 (434) 284-0836 |
From: Jed B. <je...@je...> - 2019-05-20 13:54:28
|
John Peterson <jwp...@gm...> writes: > On Sun, May 19, 2019 at 9:15 PM Yuxiang Wang <yw...@vi...> wrote: > >> Dear all, >> >> Sorry for the spam and being new to the field. >> >> I am currently trying to implement an elastodynamics problem with explicit >> method (central difference method, to be specific). I am planning to use >> lumped mass matrix (and Rayleigh damping when needed), so the system matrix >> will be simply a diagonal matrix. Solving is therefore trivial - I just >> need to do per-element division of the rhs by the diagonal entries to get >> the solution vector. >> >> For this problem, I have the option of just treating it as an implicit >> system - fill the system.matrix with only diagonal components and call the >> PETSc LU solver. This is very easy thanks to a lot of the libmesh >> infrastructure available. If I do so, should I be concerned about a >> significant slowdown? Or would PETSc be smart enough to realize that this >> is already diagonal matrix and be efficient in solving it? >> > > Yes, I'm pretty sure this will be significantly slower than doing a > "manual" inversion by taking the reciprocal of the matrix diagonal. I don't > know of anything in PETSc's LU solver that will detect this particular > special case. For an explicit code where you want every timestep to be as > fast as possible, it will likely be prohibitively slow. If you have a sparse matrix with only the diagonal nonzero, then LU will create no fill and it'll actually be pretty fast (likely hard to measure compared to the cost of evaluating the explicit part), but -pc_type jacobi would also be an exact solver and is more precisely what you need. Note that the default bjacobi/ilu is also an exact solver in this circumstance, as are many other preconditioners. > My other choice would be to create a NumericVector myself to store the >> diagonal system matrix entries, and perform the per-element division. 
This >> would take more work and will not be using the already well-tested libmesh >> infrastructure, so I am trying to see whether I can get away with doing >> this without compromising on the performance. > > > This would probably work best. You can add additional vectors to Systems > and assemble into them similarly to the way you assemble the right-hand > side vector. Also note that NumericVectors have the reciprocal() API, which > will allow you to quickly compute the inverse, as well as pointwise_mult() > which should allow you to quickly apply it. > > -- > John > > _______________________________________________ > Libmesh-users mailing list > Lib...@li... > https://lists.sourceforge.net/lists/listinfo/libmesh-users |
From: John P. <jwp...@gm...> - 2019-05-20 13:42:36
|
On Sun, May 19, 2019 at 9:15 PM Yuxiang Wang <yw...@vi...> wrote:

> Dear all,
>
> Sorry for the spam and being new to the field.
>
> I am currently trying to implement an elastodynamics problem with explicit
> method (central difference method, to be specific). I am planning to use
> lumped mass matrix (and Rayleigh damping when needed), so the system matrix
> will be simply a diagonal matrix. Solving is therefore trivial - I just
> need to do per-element division of the rhs by the diagonal entries to get
> the solution vector.
>
> For this problem, I have the option of just treating it as an implicit
> system - fill the system.matrix with only diagonal components and call the
> PETSc LU solver. This is very easy thanks to a lot of the libmesh
> infrastructure available. If I do so, should I be concerned about a
> significant slowdown? Or would PETSc be smart enough to realize that this
> is already diagonal matrix and be efficient in solving it?

Yes, I'm pretty sure this will be significantly slower than doing a "manual" inversion by taking the reciprocal of the matrix diagonal. I don't know of anything in PETSc's LU solver that will detect this particular special case. For an explicit code where you want every timestep to be as fast as possible, it will likely be prohibitively slow.

> My other choice would be to create a NumericVector myself to store the
> diagonal system matrix entries, and perform the per-element division. This
> would take more work and will not be using the already well-tested libmesh
> infrastructure, so I am trying to see whether I can get away with doing
> this without compromising on the performance.

This would probably work best. You can add additional vectors to Systems and assemble into them similarly to the way you assemble the right-hand side vector. Also note that NumericVectors have the reciprocal() API, which will allow you to quickly compute the inverse, as well as pointwise_mult(), which should allow you to quickly apply it.

-- John |
From: Yuxiang W. <yw...@vi...> - 2019-05-20 02:14:56
|
Dear all,

Sorry for the spam and being new to the field.

I am currently trying to implement an elastodynamics problem with an explicit method (the central difference method, to be specific). I am planning to use a lumped mass matrix (and Rayleigh damping when needed), so the system matrix will simply be a diagonal matrix. Solving is therefore trivial - I just need to do a per-element division of the rhs by the diagonal entries to get the solution vector.

For this problem, I have the option of just treating it as an implicit system - fill the system.matrix with only diagonal components and call the PETSc LU solver. This is very easy thanks to a lot of the libmesh infrastructure available. If I do so, should I be concerned about a significant slowdown? Or would PETSc be smart enough to realize that this is already a diagonal matrix and be efficient in solving it?

My other choice would be to create a NumericVector myself to store the diagonal system matrix entries, and perform the per-element division. This would take more work and would not use the already well-tested libmesh infrastructure, so I am trying to see whether I can get away with doing this without compromising performance.

Feedback would be very appreciated. Thank you!

Best,
Shawn

--
Yuxiang "Shawn" Wang, PhD
yw...@vi...
+1 (434) 284-0836
From: Renato P. <re...@gm...> - 2019-05-19 15:17:35
|
Hi,

I see the following messages when writing an Exodus file. It happens at the 470th step; the previous ones run fine. I write two files after every step (two meshes). Does it look like a synchronization issue?

----
(_g_step_results:470) Error opening ExodusII mesh file: test-M1000-F0-L45-H60-R5.e.g_step.dfn.e
[3] src/mesh/exodusII_io_helper.C, line 362, compiled May 18 2019 at 21:31:00
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 3
-----

This is what happens around line 362 of exodusII_io_helper.C:

355   ex_id = exII::ex_open(filename,
356                         read_only ? EX_READ : EX_WRITE,
357                         &comp_ws,
358                         &io_ws,
359                         &ex_version);
360
361   std::string err_msg = std::string("Error opening ExodusII mesh file: ") + std::string(filename);
362   EX_CHECK_ERR(ex_id, err_msg);
363   if (verbose) libMesh::out << "File opened successfully." << std::endl;
364
From: Alexander L. <ale...@gm...> - 2019-05-16 23:03:24
|
Awesome, thanks Roy.

On Tue, May 14, 2019 at 2:36 PM Stogner, Roy H <roy...@ic...> wrote:

> On Mon, 13 May 2019, Alexander Lindsay wrote:
>
>> I could maybe think of a libmesh-only case, but if you're willing to
>> run MOOSE, you can run the
>> moose/test/tests/geomsearch/nearest_node_locator/nearest_node_locator.i
>> input file. It appears to reproduce the error every time (with
>> --n-threads=2).
>
> I'm definitely managing to trigger libmesh-only errors with threaded
> projections too, and the assertion failure I'm getting is more helpful
> than that helgrind error, so I'm working from that angle first; when I
> get a patch ready I'll let you know and if it doesn't fix your case
> too then I'll try reproducing with MOOSE next.
>
> Thanks,
> ---
> Roy
From: Stogner, R. H <roy...@ic...> - 2019-05-14 20:36:20
|
On Mon, 13 May 2019, Alexander Lindsay wrote:

> I could maybe think of a libmesh-only case, but if you're willing to
> run MOOSE, you can run the
> moose/test/tests/geomsearch/nearest_node_locator/nearest_node_locator.i
> input file. It appears to reproduce the error every time (with
> --n-threads=2).

I'm definitely managing to trigger libmesh-only errors with threaded projections too, and the assertion failure I'm getting is more helpful than that helgrind error, so I'm working from that angle first; when I get a patch ready I'll let you know and if it doesn't fix your case too then I'll try reproducing with MOOSE next.

Thanks,
---
Roy
From: John P. <jwp...@gm...> - 2019-05-13 21:58:58
|
On Mon, May 13, 2019 at 4:45 PM Alexander Lindsay <ale...@gm...> wrote:

> I could maybe think of a libmesh-only case, but if you're willing to run
> MOOSE, you can run the
> moose/test/tests/geomsearch/nearest_node_locator/nearest_node_locator.i
> input file. It appears to reproduce the error every time (with
> --n-threads=2).
>
> You're probably already aware of this, but for helgrind to be useful, you
> need to compile libMesh with --with-thread-model=pthread **and** you need
> to manually ensure that you don't define LIBMESH_HAVE_OPENMP in
> libmesh_config.h. (I have to manually comment those lines out because
> configure will always enable OpenMP if it can, regardless of the thread
> model specified.)

I think this was by design, as we did not know about this helgrind limitation/use case. I think it should be possible to fix if, in threads.m4, you wrap the call to AX_OPENMP([],[enableopenmp=no]) in some additional logic and add a corresponding AC_ARG_ENABLE([openmp], ...) call that would allow someone to explicitly say --disable-openmp at configure time. I will add a ticket for this so it doesn't get lost.

-- John
From: Alexander L. <ale...@gm...> - 2019-05-13 21:45:03
|
I could maybe think of a libmesh-only case, but if you're willing to run MOOSE, you can run the moose/test/tests/geomsearch/nearest_node_locator/nearest_node_locator.i input file. It appears to reproduce the error every time (with --n-threads=2).

You're probably already aware of this, but for helgrind to be useful, you need to compile libMesh with --with-thread-model=pthread **and** you need to manually ensure that you don't define LIBMESH_HAVE_OPENMP in libmesh_config.h. (I have to manually comment those lines out because configure will always enable OpenMP if it can, regardless of the thread model specified.)

On Thu, May 9, 2019 at 8:59 PM Stogner, Roy H <roy...@ic...> wrote:

> On Thu, 9 May 2019, Alexander Lindsay wrote:
>
>> I'm getting the helgrind error below. Is this a false positive?
>
> My intuition says "no, this is a real bug", but I'm having trouble
> figuring out what's really going on here. There's a race condition
> between a read and a write to the same spot in the same PetscVector,
> from two different threads working on the SortAndCopy::operator()?
> But I don't immediately see how that's possible. That operator reads
> from one PetscVector and writes to a different PetscVector.
>
> Can you set up a case that (at least usually) reproduces the problem?
>
> Thanks,
> ---
> Roy
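The workflow Alexander describes - a pthread build with OpenMP manually disabled, then helgrind on the threaded reproducer - might be sketched roughly as below. The sed edit, header path, and application name are all assumptions standing in for the manual comment-out he mentions; adjust them for your own tree.

```shell
# Configure with the pthread threading model; helgrind understands
# pthreads but not OpenMP's runtime threads.
./configure --with-thread-model=pthread

# configure currently enables OpenMP whenever the compiler supports it,
# so disable the macro in the generated header by hand. The path and
# exact line are assumptions -- locate yours with:
#   grep -rl LIBMESH_HAVE_OPENMP include/
sed -i 's|^#define LIBMESH_HAVE_OPENMP 1|/* LIBMESH_HAVE_OPENMP disabled for helgrind */|' \
    include/libmesh_config.h

make -j4

# Run the threaded reproducer under helgrind (application name and
# input file are illustrative, matching the MOOSE test quoted above):
valgrind --tool=helgrind ./moose_test-opt \
    -i moose/test/tests/geomsearch/nearest_node_locator/nearest_node_locator.i \
    --n-threads=2
```

Note that the sed edit must be reapplied after any re-run of configure, since the header is regenerated.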
From: Paul T. B. <ptb...@gm...> - 2019-05-13 16:13:03
|
Certainly that is a small mesh and shouldn't be a bottleneck compared to anything else going on. However, we'd need many more details about your setup to help diagnose the problem. I'll mention that for questions regarding performance, you'll want to make sure you're using METHOD=opt (and linking against libmesh_opt.so), which will turn off all the debugging checks.

On Sun, May 12, 2019 at 8:46 AM Renato Poli <re...@gm...> wrote:

> Thanks.
> I will try different partitioners.
> I moved away from the virtual environment, anyway.
>
> Rgds,
> Renato
>
> On Sun, May 12, 2019 at 9:42 AM Kirk, Benjamin (JSC-EG311) <ben...@na...> wrote:
>
>> There is some Hilbert space filling curve indexing that gets invoked with
>> the default partitioner, and who knows in a virtual environment. Also it’s
>> worth trying a different partitioner. You can set the partitioner to any
>> supported type, but I’ve not got the documentation handy at the moment.
>>
>> On: 11 May 2019 21:09, "Renato Poli" <re...@gm...> wrote:
>>
>>> Hi
>>>
>>> It seems that partitioning is taking a lot of time.
>>> If I skip it, it runs much faster:
>>>     mesh.skip_partitioning(true);
>>> Does that make any sense?
>>>
>>> Renato
>>>
>>> On Sat, May 11, 2019 at 10:59 PM Renato Poli <re...@gm...> wrote:
>>>
>>>> Hi Kirk,
>>>>
>>>> I see there is something related to the parallelization.
>>>> I am using mpirun.mpich.
>>>> With a single processor, it runs much faster than with 4 processors.
>>>> Please find data below.
>>>>
>>>> Why would parallel reading be so much slower?
>>>> Any suggestions?
>>>>
>>>> XDR - 1 processor
>>>> # Stopwatch "LibMesh::read": 12.7637 s
>>>> XDR - 4 processors
>>>> # Stopwatch "LibMesh::read": 135.473 s
>>>> EXO - 1 processor
>>>> # Stopwatch "LibMesh::read": 0.294671 s
>>>> EXO - 4 processors
>>>> # Stopwatch "LibMesh::read": 198.897 s
>>>>
>>>> This is the mesh:
>>>> ======
>>>> Mesh Information:
>>>>   elem_dimensions()={2}
>>>>   spatial_dimension()=2
>>>>   n_nodes()=40147
>>>>     n_local_nodes()=40147
>>>>   n_elem()=19328
>>>>     n_local_elem()=19328
>>>>     n_active_elem()=19328
>>>>   n_subdomains()=1
>>>>   n_partitions()=1
>>>>   n_processors()=1
>>>>   n_threads()=1
>>>>   processor_id()=0
>>>>
>>>> On Sat, May 11, 2019 at 7:15 PM Renato Poli <re...@gm...> wrote:
>>>>
>>>>> Thanks.
>>>>> I am currently running on a virtual machine - not sure mpi is getting
>>>>> along with that.
>>>>> I will try other approaches and bring more information if necessary.
>>>>>
>>>>> rgds,
>>>>> Renato
>>>>>
>>>>> On Sat, May 11, 2019 at 5:49 PM Kirk, Benjamin (JSC-EG311) <ben...@na...> wrote:
>>>>>
>>>>>> Definitely not right, but that seems like something in your machine or
>>>>>> filesystem.
>>>>>>
>>>>>> You can use the “meshtool-opt” command to convert it to XDR and try
>>>>>> that for comparison. We’ve got users who routinely read massive meshes with
>>>>>> ExodusII, so I’m skeptical of a performance regression.
>>>>>>
>>>>>> -Ben
>>>>>>
>>>>>> On: 11 May 2019 15:24, "Renato Poli" <re...@gm...> wrote:
>>>>>>
>>>>>>> Hi
>>>>>>>
>>>>>>> I am reading in a mesh of 20.000 elements.
>>>>>>> I am using the Exodus format.
>>>>>>> It takes up to 4 minutes.
>>>>>>> Is that right?
>>>>>>> How can I enhance performance?
>>>>>>>
>>>>>>> Thanks,
>>>>>>> Renato

_______________________________________________
Libmesh-users mailing list
Lib...@li...
https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: Renato P. <re...@gm...> - 2019-05-12 12:45:50
|
Thanks.
I will try different partitioners.
I moved away from the virtual environment, anyway.

Rgds,
Renato

On Sun, May 12, 2019 at 9:42 AM Kirk, Benjamin (JSC-EG311) <ben...@na...> wrote:

> There is some Hilbert space filling curve indexing that gets invoked with
> the default partitioner, and who knows in a virtual environment. Also it’s
> worth trying a different partitioner. You can set the partitioner to any
> supported type, but I’ve not got the documentation handy at the moment.
>
> On: 11 May 2019 21:09, "Renato Poli" <re...@gm...> wrote:
>
>> Hi
>>
>> It seems that partitioning is taking a lot of time.
>> If I skip it, it runs much faster:
>>     mesh.skip_partitioning(true);
>> Does that make any sense?
>>
>> Renato
>>
>> On Sat, May 11, 2019 at 10:59 PM Renato Poli <re...@gm...> wrote:
>>
>>> Hi Kirk,
>>>
>>> I see there is something related to the parallelization.
>>> I am using mpirun.mpich.
>>> With a single processor, it runs much faster than with 4 processors.
>>> Please find data below.
>>>
>>> Why would parallel reading be so much slower?
>>> Any suggestions?
>>>
>>> XDR - 1 processor
>>> # Stopwatch "LibMesh::read": 12.7637 s
>>> XDR - 4 processors
>>> # Stopwatch "LibMesh::read": 135.473 s
>>> EXO - 1 processor
>>> # Stopwatch "LibMesh::read": 0.294671 s
>>> EXO - 4 processors
>>> # Stopwatch "LibMesh::read": 198.897 s
>>>
>>> This is the mesh:
>>> ======
>>> Mesh Information:
>>>   elem_dimensions()={2}
>>>   spatial_dimension()=2
>>>   n_nodes()=40147
>>>     n_local_nodes()=40147
>>>   n_elem()=19328
>>>     n_local_elem()=19328
>>>     n_active_elem()=19328
>>>   n_subdomains()=1
>>>   n_partitions()=1
>>>   n_processors()=1
>>>   n_threads()=1
>>>   processor_id()=0
>>>
>>> On Sat, May 11, 2019 at 7:15 PM Renato Poli <re...@gm...> wrote:
>>>
>>>> Thanks.
>>>> I am currently running on a virtual machine - not sure mpi is getting
>>>> along with that.
>>>> I will try other approaches and bring more information if necessary.
>>>>
>>>> rgds,
>>>> Renato
>>>>
>>>> On Sat, May 11, 2019 at 5:49 PM Kirk, Benjamin (JSC-EG311) <ben...@na...> wrote:
>>>>
>>>>> Definitely not right, but that seems like something in your machine or
>>>>> filesystem.
>>>>>
>>>>> You can use the “meshtool-opt” command to convert it to XDR and try
>>>>> that for comparison. We’ve got users who routinely read massive meshes with
>>>>> ExodusII, so I’m skeptical of a performance regression.
>>>>>
>>>>> -Ben
>>>>>
>>>>> On: 11 May 2019 15:24, "Renato Poli" <re...@gm...> wrote:
>>>>>
>>>>>> Hi
>>>>>>
>>>>>> I am reading in a mesh of 20.000 elements.
>>>>>> I am using the Exodus format.
>>>>>> It takes up to 4 minutes.
>>>>>> Is that right?
>>>>>> How can I enhance performance?
>>>>>>
>>>>>> Thanks,
>>>>>> Renato
>>>>>>
>>>>>> _______________________________________________
>>>>>> Libmesh-users mailing list
>>>>>> Lib...@li...
>>>>>> https://lists.sourceforge.net/lists/listinfo/libmesh-users