From: Manav B. <bha...@gm...> - 2018-04-03 17:21:40
The Computational Dynamics and Design Lab (http://bhatia.ae.msstate.edu) at Mississippi State University has an immediate opening for a post-doctoral researcher. The work involves development of advanced finite element schemes for analysis and design of structures with plasticity and high strain-rate mechanics. Finite element analysis will involve high-order discretization with the development of adjoint-based error metrics to drive hp-adaptivity for large-scale systems, and efficient adjoint-sensitivity analysis techniques for application in design optimization.

Applications are sought from highly motivated researchers with a PhD in Engineering/Physics/Applied Mathematics with strong mathematical and programming skills. A successful applicant will have:

— Prior programming experience for finite element analysis, preferably with MPI-enabled parallel computing.
— Proficiency with C/C++ programming.
— Exposure to open-source libraries for scientific computations, such as libMesh, deal.II, PETSc, SLEPc, etc.
— Strong writing skills and experience with LaTeX.

Interested candidates can email the following items to Prof. Manav Bhatia (Bh...@ae...):

— CV including the contact information of references,
— representative publications.

Thanks,
Manav
From: Paul T. B. <ptb...@gm...> - 2018-04-03 13:05:03
Hello,

You can get the spatial locations of the quadrature points from your finite element object:
http://libmesh.github.io/doxygen/classlibMesh_1_1FEAbstract.html#a6c196ce8080e5082ca424aadde821fa3

Please let us know if you have more questions.

Best,
Paul

On Mon, Apr 2, 2018 at 11:08 PM, Xie_Jinyi <oph...@qq...> wrote:
> Hi all,
>
> I have been dealing with a problem where I need to evaluate a specific
> function on the quadrature points to assemble the right hand side, so I
> need to get the physical space locations of the quadrature points. I guess
> it is related to "get_element_qrule" and I wonder if there is a function
> which can access the spatial coordinates. Could anyone give me a hint of
> it? Your suggestions would be appreciated.
>
> Best regards,
> Ophélie
From: David K. <dav...@ak...> - 2018-04-03 12:36:21
On Tue, Apr 3, 2018 at 4:58 AM, <ss...@pu...> wrote:
> Thank you for your reply, David.
>
> I understood the answer as follows: ThetaA1 and ThetaA2 can be neglected
> in calculating the coercivity lower bound due to the divergence-free
> convection field.
>
> However, I am not familiar with the field of thermal fluid engineering, so
> I do not know in detail why it can be ignored.
>
> Could you tell me more about this problem with formulations of the lower
> bound and the divergence-free convection field?
>
> Or can you tell me about references related to this problem?
>
> I always appreciate your help.

This follows from the definition of the coercivity constant. Set the trial and test functions to be the same in the convection-diffusion bilinear form, integrate the convection term by parts, and you end up with a term that includes the divergence of the velocity field. If the velocity field is divergence-free, then that term vanishes.

David
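[Editor's note: the integration-by-parts step described above can be written out explicitly. This is a sketch, with $\boldsymbol{\beta}$ denoting the convection velocity and homogeneous Dirichlet conditions assumed so that the boundary term drops. Taking the trial and test functions both equal to $v$ in the convection term:]

```latex
\int_\Omega (\boldsymbol{\beta}\cdot\nabla v)\,v\,dx
  = \frac{1}{2}\int_\Omega \boldsymbol{\beta}\cdot\nabla(v^2)\,dx
  = -\frac{1}{2}\int_\Omega (\nabla\cdot\boldsymbol{\beta})\,v^2\,dx
    + \frac{1}{2}\int_{\partial\Omega} (\boldsymbol{\beta}\cdot\mathbf{n})\,v^2\,ds
```

So if $\nabla\cdot\boldsymbol{\beta} = 0$, the convection term contributes nothing to $a(v,v;\mu)$, and only the diffusion term enters the coercivity constant.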
From: <ss...@pu...> - 2018-04-03 08:58:39
Thank you for your reply, David.

I understood the answer as follows: ThetaA1 and ThetaA2 can be neglected in calculating the coercivity lower bound due to the divergence-free convection field.

However, I am not familiar with the field of thermal fluid engineering, so I do not know in detail why it can be ignored.

Could you tell me more about this problem with formulations of the lower bound and the divergence-free convection field? Or can you tell me about references related to this problem?

I always appreciate your help.

Regards,
SKang
From: <oph...@qq...> - 2018-04-03 03:24:23
Hi all,

I have been dealing with a problem where I need to evaluate a specific function on the quadrature points to assemble the right hand side, so I need to get the physical space locations of the quadrature points. I guess it is related to "get_element_qrule" and I wonder if there is a function which can access the spatial coordinates. Could anyone give me a hint of it? Your suggestions would be appreciated.

Best regards,
Ophélie
From: David K. <dav...@ak...> - 2018-04-02 11:55:40
On Sun, Apr 1, 2018 at 10:26 PM, <ss...@pu...> wrote:
> Hello, all.
>
> I try to derive an RB error bound using the min-theta approach.
>
> First of all, I saw the RB example 1 because this example shows the value
> of coercivity constant lower bound, not dummy value.
>
> However, this example does not satisfy requirements of the min-theta
> approach.
>
> If so, how was the lower bound value of the example 1 obtained?
>
> I do not know why the lower bound value is 0.05 in the RB example 1.

If I recall correctly, ThetaA1 and ThetaA2 are irrelevant to the coercivity constant because they give a divergence-free convection field (it's clearly divergence-free since the field is constant everywhere, given by the parameters x_vel and y_vel). As a result we can set the coercivity lower bound to be the value returned by ThetaA0, which is 0.05.

Of course, this is a simple case, and in general one must use the SCM to get the coercivity lower bound. I would say that in practice a lot of people are satisfied with skipping the SCM and just setting a dummy value (e.g. 1) for the coercivity lower bound. This means that the error bound isn't rigorous anymore, but it's still useful as an error indicator.

David
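[Editor's note: for context, the min-theta bound under discussion can be written out. This is a sketch in standard RB notation, not taken from the example; $\bar\mu$ denotes a reference parameter with known coercivity constant $\alpha(\bar\mu)$. If the bilinear form decomposes as $a(\cdot,\cdot;\mu) = \sum_q \theta^q(\mu)\, a^q(\cdot,\cdot)$ with each $a^q$ positive semidefinite and each $\theta^q(\mu) > 0$, then:]

```latex
\alpha_{\mathrm{LB}}(\mu)
  \;=\; \alpha(\bar\mu)\,\min_{q}\frac{\theta^q(\mu)}{\theta^q(\bar\mu)}
  \;\le\; \alpha(\mu)
```

RB example 1 does not meet these positivity requirements (the convection forms are not semidefinite), which is why the min-theta formula does not apply directly; instead, as explained above, the convection terms drop out of the coercivity estimate entirely, leaving the value returned by ThetaA0.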
From: <ss...@pu...> - 2018-04-02 02:53:28
Hello, all.

I try to derive an RB error bound using the min-theta approach. First of all, I saw the RB example 1 because this example shows the value of the coercivity constant lower bound, not a dummy value. However, this example does not satisfy the requirements of the min-theta approach.

If so, how was the lower bound value of the example 1 obtained? I do not know why the lower bound value is 0.05 in the RB example 1.

I look forward to your reply. Thank you.

Regards,
SKang
From: Roy S. <roy...@ic...> - 2018-03-28 15:29:49
On Wed, 28 Mar 2018, Jorge Lopes wrote:

> \nabla V = Source(\phi)
> The source depends on the function \phi.

The more complicated math probably makes the software questions easier: even in the implicit integration case you'll probably want to solve as a fully coupled system and then consider decoupling in the solver, rather than setting up two independent systems and iterating between them. The fem_system (if you want to use it) and systems_of_equations examples may be good places to start.

> I'd say we start with explicit to see if it runs and simulates
> smoothly and then, if there are numerical problems we can switch to
> implicit.
>
> We have some freedom here, at this point either is fine.

I'm afraid you may have performance problems to worry about too, in the explicit case: libMesh users originally all used implicit time stepping, which meant there wasn't much attention placed on optimizing any per-time-step costs which are negligible compared to a linear solve, which means that if you're trying to avoid linear solves by doing lots more time steps you may see those costs show up... which tends to push libMesh users toward implicit rather than explicit time stepping whenever both are reasonable options, which perpetuates the cycle.

So if you *do* want to start with explicit time stepping, keep us in the loop. There may be opportunities for library optimization that we just haven't bothered with because the developers who could make those improvements don't have great benchmark codes with which to work on the problem.

---
Roy
From: Jorge L. <jor...@gm...> - 2018-03-28 10:56:23
Thank you for the reply,

2018-03-26 23:07 GMT+01:00 Roy Stogner <roy...@ic...>:

> On Thu, 22 Mar 2018, Jorge Lopes wrote:
>
>> 1) Understanding the structure of the documentation: It is highly
>> non-trivial to browse through that.
>
> It is; it's a half-decent reference but not a tutorial.
>
> The closest thing we have to a tutorial is the examples, and even
> there you have to skim descriptions and skip over the ones
> focused on features you don't care about.
>
>> I'm going to describe my problem and ask you to direct me to several
>> relevant structures for me to get familiar with.
>>
>> 2) The problem I want to tackle is a combination of the Poisson equation
>> and the Heat Equation.
>> For a given time step we solve
>> \nabla V = Source
>> and use this to evolve the Heat-like equation
>> i \partial \phi / \partial t = f(\phi, \phi', \phi'') + V \phi
>> where \phi is a complex function, naturally.
>>
>> 3) I noted that libmesh has the 2 problems solved separately in
>> examples/introduction/introduction_ex4 and somewhere in fem_systems. If
>> this is useful, I'd have my problem half solved.
>>
>> All I'm looking for is some guidelines on how to start tackling the
>> problem with my moderate knowledge of C++ and enormous difficulty in
>> browsing through the documentation.
>
> Two questions about your problem:
>
> Do you want to solve it decoupled (e.g. if Source is independent of
> phi, so you can get an exact value of V at each time step before
> solving for phi) or (weakly or fully?) coupled?

\nabla V = Source(\phi)

The source depends on the function \phi. Sorry for not being explicit about this.

> Do you want to integrate with implicit time stepping, explicit, or
> both?

I'd say we start with explicit to see if it runs and simulates smoothly and then, if there are numerical problems we can switch to implicit. We have some freedom here, at this point either is fine.

> ---
> Roy

Thank you,
Hellium0
From: Roy S. <roy...@ic...> - 2018-03-27 21:15:48
On Tue, 27 Mar 2018, Salazar De Troya, Miguel wrote:

> Can I generate the cpa from the cpr without having to save the mesh again?

No; if that's not reasonable just upload the CPR for me?

> Sending the whole code might be difficult. The mesh that I have
> saved is the one generated after the a posteriori adaptivity.

OK; just the mesh alone might be enough to work from, then.

---
Roy
From: Salazar De T. M. <sal...@ll...> - 2018-03-27 19:30:41
On 3/26/18, 3:12 PM, "Roy Stogner" <roy...@ic...> wrote:

> On Thu, 22 Mar 2018, Salazar De Troya, Miguel wrote:
>
>> I've got the mesh in cpr right before calling
>> make_node_proc_ids_parallel_consistent(), do you have a preferred
>> way to share it to you?
>
> Could you put it up in cpa (compressed if need be) format? In the
> most heinous cases I find myself staring at mesh files in an editor...

Can I generate the cpa from the cpr without having to save the mesh again?

>> It's 123 MB.
>
> I think my email system will bounce that, so if you could put it up
> for HTTP/FTP somewhere I'd appreciate it. If not let me know and I'll
> set you up with a temporary account for an FTP upload.
>
>> None of those cases applies to me. I am using a regular 3D mesh
>> generated with 3 cubes using MeshTools::Generation::build_cube() and
>> stitching them together. I am going to make sure there is not a
>> misalignment when stitching those meshes together. The elements are
>> simply linear lagrange. I can send you the code that generates the
>> mesh as well.
>
> The mesh is dying after some kind of a posteriori adaptivity, isn't
> it? In which case I'd need the whole simulation code to hit the same
> error? In that case just getting the generated initial mesh wouldn't
> be helpful.
>
> If I've misunderstood you and the very first mesh is throwing an
> assert for you, though, then it would be very helpful to have the mesh
> generation code!

Sending the whole code might be difficult. The mesh that I have saved is the one generated after the a posteriori adaptivity.

Miguel
From: Roy S. <roy...@ic...> - 2018-03-26 22:12:33
On Thu, 22 Mar 2018, Salazar De Troya, Miguel wrote:

> I've got the mesh in cpr right before calling
> make_node_proc_ids_parallel_consistent(), do you have a preferred
> way to share it to you?

Could you put it up in cpa (compressed if need be) format? In the most heinous cases I find myself staring at mesh files in an editor...

> It's 123 MB.

I think my email system will bounce that, so if you could put it up for HTTP/FTP somewhere I'd appreciate it. If not let me know and I'll set you up with a temporary account for an FTP upload.

> None of those cases applies to me. I am using a regular 3D mesh
> generated with 3 cubes using MeshTools::Generation::build_cube() and
> stitching them together. I am going to make sure there is not a
> misalignment when stitching those meshes together. The elements are
> simply linear lagrange. I can send you the code that generates the
> mesh as well.

The mesh is dying after some kind of a posteriori adaptivity, isn't it? In which case I'd need the whole simulation code to hit the same error? In that case just getting the generated initial mesh wouldn't be helpful.

If I've misunderstood you and the very first mesh is throwing an assert for you, though, then it would be very helpful to have the mesh generation code!

---
Roy
From: Roy S. <roy...@ic...> - 2018-03-26 22:08:52
On Fri, 23 Mar 2018, Salazar De Troya, Miguel wrote:

> Are the SCALAR variables added to the local vectors for each processor?

As ghosted dof coefficients? Yes.

The only exception is that, if you have a processor which doesn't own any elements on which the SCALAR is defined (either because the processor doesn't own *anything* at the moment or because you have a subdomain-restricted variable and the processor doesn't own any elements in the subdomain(s) it's restricted to), it won't get a copy of the SCALAR values either.

---
Roy
From: Roy S. <roy...@ic...> - 2018-03-26 22:07:08
On Thu, 22 Mar 2018, Jorge Lopes wrote:

> 1) Understanding the structure of the documentation: It is highly
> non-trivial to browse through that.

It is; it's a half-decent reference but not a tutorial.

The closest thing we have to a tutorial is the examples, and even there you have to skim descriptions and skip over the ones focused on features you don't care about.

> I'm going to describe my problem and ask you to direct me to several
> relevant structures for me to get familiar with.
>
> 2) The problem I want to tackle is a combination of the Poisson equation
> and the Heat Equation.
> For a given time step we solve
> \nabla V = Source
> and use this to evolve the Heat-like equation
> i \partial \phi / \partial t = f(\phi, \phi', \phi'') + V \phi
> where \phi is a complex function, naturally.
>
> 3) I noted that libmesh has the 2 problems solved separately in
> examples/introduction/introduction_ex4 and somewhere in fem_systems. If
> this is useful, I'd have my problem half solved.
>
> All I'm looking for is some guidelines on how to start tackling the
> problem with my moderate knowledge of C++ and enormous difficulty in
> browsing through the documentation.

Two questions about your problem:

Do you want to solve it decoupled (e.g. if Source is independent of phi, so you can get an exact value of V at each time step before solving for phi) or (weakly or fully?) coupled?

Do you want to integrate with implicit time stepping, explicit, or both?

---
Roy
From: David K. <dav...@ak...> - 2018-03-26 12:45:45
On Mon, Mar 26, 2018 at 1:07 AM, Gauvain Wu <cau...@gm...> wrote:
> Hi all,
>
> I was running the offline stage of a reduced basis model. At the
> beginning, the maximum error bound decreased at a normal rate and the value
> appeared acceptable (around 20000). However, when the basis dimension
> increased to 6, the maximum error bound climbed drastically to e+149 or so
> and didn't decrease any more. What I had modified was the range of the
> parameters, which is listed below:
>
> beta_T0 = '1.e3 3.e3' # min, max values
> rho_c = '2.4 4.5'
> heat_flux = '20 30'
> h = '0 1'
> h_(Tinf-T0) = '0 1'
> G = '7.e4 9.e4'
> lambda = '2.e5 4.e5'
> beta = '5 15'
> k = '10 20'
>
> The code was run with the option mpirun -np 4 ./Thermoelasticity-opt
> -online_mode 0 -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package
> mumps. Could anyone give a hint of what causes this problem? Thanks in
> advance.

I don't know what the problem is, but note that you have 9 parameters here, which is a lot of parameters for an RB model. It may not be practical to use this many parameters in your case.

I would suggest starting with 1 parameter (set the other parameters to fixed values) and make sure that works, then go to 2 parameters, etc. That may help you to figure out what the issue is as well.

David
From: Gauvain Wu <cau...@gm...> - 2018-03-26 05:08:17
Hi all,

I was running the offline stage of a reduced basis model. At the beginning, the maximum error bound decreased at a normal rate and the value appeared acceptable (around 20000). However, when the basis dimension increased to 6, the maximum error bound climbed drastically to e+149 or so and didn't decrease any more. What I had modified was the range of the parameters, which is listed below:

beta_T0 = '1.e3 3.e3' # min, max values
rho_c = '2.4 4.5'
heat_flux = '20 30'
h = '0 1'
h_(Tinf-T0) = '0 1'
G = '7.e4 9.e4'
lambda = '2.e5 4.e5'
beta = '5 15'
k = '10 20'

The code was run with the option mpirun -np 4 ./Thermoelasticity-opt -online_mode 0 -ksp_type preonly -pc_type lu -pc_factor_mat_solver_package mumps. Could anyone give a hint of what causes this problem? Thanks in advance.

Gauvain
From: Salazar De T. M. <sal...@ll...> - 2018-03-23 17:30:02
Are the SCALAR variables added to the local vectors for each processor?

Miguel

On 3/20/18, 11:52 AM, "Roy Stogner" <roy...@ic...> wrote:

> On Tue, 20 Mar 2018, David Knezevic wrote:
>
>> IIRC we stick them at the end, so typically the last (i.e. (rank-1)^th)
>> processor will own all the SCALARs.
>
> This is correct, but please try to avoid writing codes depending on it;
> it's one of the upcoming items on our "Stuff I thought would be fine
> because small N but which I need to change because INL Ns are huge"
> chopping block list (where in this case N is the number of SCALAR dofs
> in a problem).
>
> ---
> Roy
From: Salazar De T. M. <sal...@ll...> - 2018-03-22 18:30:47
Hi,

I've got the mesh in cpr right before calling make_node_proc_ids_parallel_consistent(), do you have a preferred way to share it with you? It's 123 MB.

None of those cases applies to me. I am using a regular 3D mesh generated with 3 cubes using MeshTools::Generation::build_cube() and stitching them together. I am going to make sure there is not a misalignment when stitching those meshes together. The elements are simply linear Lagrange. I can send you the code that generates the mesh as well.

Miguel
From: Jorge L. <jor...@gm...> - 2018-03-22 13:43:26
Hi everyone,

So I am trying to start using libmesh, and I have quite some questions that more experienced people might easily be able to advise on.

1) Understanding the structure of the documentation: It is highly non-trivial to browse through that. I'm going to describe my problem and ask you to direct me to several relevant structures for me to get familiar with.

2) The problem I want to tackle is a combination of the Poisson equation and the Heat Equation. For a given time step we solve

\nabla V = Source

and use this to evolve the Heat-like equation

i \partial \phi / \partial t = f(\phi, \phi', \phi'') + V \phi

where \phi is a complex function, naturally.

3) I noted that libmesh has the 2 problems solved separately in examples/introduction/introduction_ex4 and somewhere in fem_systems. If this is useful, I'd have my problem half solved.

All I'm looking for is some guidelines on how to start tackling the problem with my moderate knowledge of C++ and enormous difficulty in browsing through the documentation.

Thank you all for your help in advance,
Hellium0
From: Roy S. <roy...@ic...> - 2018-03-20 18:51:54
On Tue, 20 Mar 2018, David Knezevic wrote:

> IIRC we stick them at the end, so typically the last (i.e. (rank-1)^th)
> processor will own all the SCALARs.

This is correct, but please try to avoid writing codes depending on it; it's one of the upcoming items on our "Stuff I thought would be fine because small N but which I need to change because INL Ns are huge" chopping block list (where in this case N is the number of SCALAR dofs in a problem).

---
Roy
From: David K. <dav...@ak...> - 2018-03-20 18:38:53
On Tue, Mar 20, 2018 at 1:24 PM, Salazar De Troya, Miguel <sal...@ll...> wrote:
> Seeing the systems_of_equations_ex5 example:
>
> * Why is a SCALAR variable of order FIRST and not CONSTANT? I
> understand it is a variable that takes the same value over the whole domain.

The order of a SCALAR corresponds to how many extra dofs we add to the system, so FIRST --> 1 extra dof, SECOND --> 2 extra dofs, etc. The extra dofs are typically used to impose constraints, as you probably know, so sometimes it's useful to choose a higher-order SCALAR if you have more than one constraint to impose.

We could have restricted SCALAR to be CONSTANT, as you suggested, but then you'd have to add extra SCALAR variables each time you want another constraint, rather than just increasing the SCALAR order.

> * How are SCALAR variables partitioned and to which processor are they
> sent?

IIRC we stick them at the end, so typically the last (i.e. (rank-1)^th) processor will own all the SCALARs.

David
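[Editor's note: to illustrate the constraint-imposition point, a FIRST-order SCALAR variable adds a single dof $\lambda$ that can act as a Lagrange multiplier, e.g. to fix the otherwise-undetermined constant in a pure-Neumann Poisson problem by enforcing a zero mean. This is a generic sketch, not the exact formulation of systems_of_equations_ex5:]

```latex
\text{Find } (u,\lambda) \text{ such that, for all } (v,\mu):\quad
\int_\Omega \nabla u\cdot\nabla v\,dx \;+\; \lambda\int_\Omega v\,dx
  \;=\; \int_\Omega f\,v\,dx,
\qquad
\mu\int_\Omega u\,dx \;=\; 0
```

A SECOND-order SCALAR would similarly supply two multipliers for two independent constraints.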
From: Salazar De T. M. <sal...@ll...> - 2018-03-20 17:24:37
Seeing the systems_of_equations_ex5 example:

* Why is a SCALAR variable of order FIRST and not CONSTANT? I understand it is a variable that takes the same value over the whole domain.
* How are SCALAR variables partitioned and to which processor are they sent?

Thanks
Miguel
From: Roy S. <roy...@ic...> - 2018-03-20 14:50:42
On Tue, 20 Mar 2018, Salazar De Troya, Miguel wrote:

> this->make_node_proc_ids_parallel_consistent(mesh) did not work. I get
> the exact same error. Is this strange?

From the outside view, "is it strange that extremely complicated parallel code isn't working perfectly?" is sad, but not strange at all.

From the inside view, it seems exceptionally strange to me. We sync all nodes' processor ids (albeit by element id with local node id, since we don't yet have consistent global node ids at this point in the code), and then when we check to see whether all nodes' processor ids are in sync, they aren't?

At this point I'd be thrilled if you could send me code with which to try to replicate the problem, or even if you could just output a CheckpointIO file set right before the make_node_proc_ids_parallel_consistent() call and send me that. But for now, let's see what I can figure out at arm's length. I only know of two situations where a sync by element id with local node id wouldn't be sufficient. Let me know if either applies to you?

1. You have a node N which isn't connected to any elements. (This can be fixed by using a NodeElem in your Mesh.)

2. You have a node N which is connected to two elements, let's call them E1 and E5, such that no chain of neighbor links like "E1->E2->E3->E4->E5" (with any number of intermediate elements) exists for which all intermediate elements are connected to N. Imagine two squares connected only at one corner - we don't support such a domain with DistributedMesh unless you add a complicated GhostingFunctor to handle the one-point-only connection.

---
Roy
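[A toy diagnostic for the two situations above, on a serial mesh given as element-to-node-id lists. This is not libMesh code: the names are invented, and "neighbor" is approximated here as two elements sharing at least two nodes (an edge in 2D), whereas a real mesh library uses proper face neighbors. It is only meant to make the two pathological cases concrete.]

```cpp
#include <set>
#include <vector>

using Elem = std::vector<unsigned>;  // an element as a list of node ids

// Case 1: nodes attached to no element at all.
std::set<unsigned> orphan_nodes(const std::vector<Elem> &elems,
                                unsigned n_nodes)
{
  std::vector<bool> seen(n_nodes, false);
  for (const Elem &e : elems)
    for (unsigned n : e) seen[n] = true;
  std::set<unsigned> orphans;
  for (unsigned n = 0; n < n_nodes; ++n)
    if (!seen[n]) orphans.insert(n);
  return orphans;
}

// Case 2: a node whose incident elements cannot all reach each other
// through chains of neighboring elements (neighbor = share >= 2 nodes).
bool corner_only_connection(const std::vector<Elem> &elems, unsigned node)
{
  std::vector<unsigned> inc;  // indices of elements touching `node`
  for (unsigned i = 0; i < elems.size(); ++i)
    for (unsigned n : elems[i])
      if (n == node) { inc.push_back(i); break; }
  if (inc.size() < 2) return false;

  // Flood-fill over the neighbor relation, starting from inc[0].
  std::set<unsigned> reached{inc[0]};
  bool grew = true;
  while (grew)
  {
    grew = false;
    for (unsigned a : inc)
    {
      if (reached.count(a)) continue;
      for (unsigned b : reached)
      {
        unsigned shared = 0;
        for (unsigned na : elems[a])
          for (unsigned nb : elems[b])
            if (na == nb) ++shared;
        if (shared >= 2) { reached.insert(a); grew = true; break; }
      }
      if (grew) break;  // restart scan after growing the reached set
    }
  }
  return reached.size() != inc.size();
}
```

Two quads {0,1,2,3} and {3,4,5,6} joined only at node 3 trip the second check, while two quads sharing a full edge do not.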
From: Salazar De T. M. <sal...@ll...> - 2018-03-20 14:12:57
this->make_node_proc_ids_parallel_consistent(mesh) did not work. I get the exact same error. Is this strange?

Miguel

________________________________
From: Roy Stogner <roy...@ic...>
Sent: Monday, March 19, 2018 12:28:26 PM
To: Salazar De Troya, Miguel
Cc: lib...@li...
Subject: Re: [Libmesh-users] Assertion `min_id == node->processor_id()' failed.

On Mon, 19 Mar 2018, Salazar De Troya, Miguel wrote:

> I found a slight difference between the trace files:
>
> The traceout_8_142118.txt contains
>
> libMesh::MeshTools::libmesh_assert_parallel_consistent_procids<libMesh::Node> (mesh=...) at src/mesh/mesh_tools.C:1608
>
> whereas traceout_57_85461.txt and traceout_11_104555.txt:
>
> libMesh::MeshTools::libmesh_assert_parallel_consistent_procids<libMesh::Node> (mesh=...) at src/mesh/mesh_tools.C:1609
>
> Not sure if this helps.

No; I'm afraid that's expected from that stack trace: processors who think the node should be on processor 57 are screaming that 57 doesn't match the minimum proc_id of 11, but processors who think it should be on processor 11 are screaming that 11 doesn't match the maximum proc_id of 57.

> #7  0x00002aaaaebe174e in libMesh::MeshTools::libmesh_assert_parallel_consistent_procids<libMesh::Node> (mesh=...) at src/mesh/mesh_tools.C:1608
> #8  0x00002aaaaeba931e in libMesh::MeshTools::correct_node_proc_ids (mesh=...) at src/mesh/mesh_tools.C:1844
> #9  0x00002aaaae69a0ce in libMesh::MeshCommunication::make_new_nodes_parallel_consistent (this=0x2320a, mesh=...) at src/mesh/mesh_communication.C:1776
> #10 0x00002aaaaea95919 in libMesh::MeshRefinement::_refine_elements (this=0x2320a) at src/mesh/mesh_refinement.C:1601
> #11 0x00002aaaaea6a4d1 in libMesh::MeshRefinement::refine_and_coarsen_elements (this=0x2320a) at src/mesh/mesh_refinement.C:578
> #12 0x00002aaab9d69dcd in OptiProblem::solve (this=0x7fffffffabd8) at /g/g92/miguel/code/topsm/src/opti_problem.C:370
> #13 0x00000000004371b8 in main (argc=4, argv=0x7fffffffb798) at /g/g92/miguel/code/topsm/test/3D_stress_constraint/linear_stress_opti.C:196
>
> Are there other things I can do to debug this?

One possible fix you could try first: in mesh_communication.C:1767, where it says

    this->make_new_node_proc_ids_parallel_consistent(mesh);

try changing it to

    this->make_node_proc_ids_parallel_consistent(mesh);

It could be that you're in some corner case I didn't imagine, which causes a processor to fail to identify and correct a new potentially-inconsistent processor_id, and if so then maybe telling the code to sync up *all* node processor_id() values will fix that. Let me know whether or not that works?

This is a frighteningly tricky part of the code; in fact, you can gawk at the current state of my failed attempts to improve load balancing of processor ids in https://github.com/libMesh/libmesh/pull/1621. The good news about that PR is that it has me digging into corner cases here myself, so hopefully when I'm finished it will fix your code too, if my suggested fix above doesn't. The bad news is that there's also a chance of me immediately re-*breaking* your code even if my suggested fix above works - if you wouldn't mind, I'll let you know when the PR is ready so you can run your own tests, just in case they catch something that our CI misses.

---
Roy
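[A toy model of the consistency rule behind the failing `min_id == node->processor_id()` assertion. This is not libMesh code: the map below simply stands in for "which ranks the various processors believe should own each node", and the convention sketched (take the minimum claimed rank, as a min-reduction across ranks would) is an assumption made for illustration of why 57 vs. 11 disagreements trip both the min-side and max-side checks.]

```cpp
#include <map>
#include <set>
#include <vector>

// node id -> set of ranks claimed as that node's owner
using Claims = std::map<unsigned, std::set<unsigned>>;

// Nodes whose claimed owners disagree -- these are exactly the nodes
// that would trip a parallel-consistency assertion.
std::vector<unsigned> inconsistent_nodes(const Claims &claims)
{
  std::vector<unsigned> bad;
  for (const auto &c : claims)
    if (c.second.size() > 1)
      bad.push_back(c.first);
  return bad;
}

// "Sync" step: resolve every disagreement by assigning the minimum
// claimed rank as the owner.
std::map<unsigned, unsigned> sync_owners(const Claims &claims)
{
  std::map<unsigned, unsigned> owner;
  for (const auto &c : claims)
    owner[c.first] = *c.second.begin();  // std::set is sorted: min rank
  return owner;
}
```

In the failing case above, one node effectively has claims {11, 57}; after a correct sync every rank would agree on 11, and the assertion is that no such two-owner node survives the sync.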
From: Roy S. <roy...@ic...> - 2018-03-20 13:49:28
On Tue, 20 Mar 2018, David Knezevic wrote:

> No, you should not have to redefine anything. If you're using Trelis,
> then define the subdomains by defining "blocks" in an ExodusII mesh, and
> they will be read into libMesh and stored as subdomain_ids. You should
> not have to change node numbers. In general your code should not care
> what the node numbering is.

Although I usually strongly agree with that last statement, I should point out that there are people who disagree, and that at least 1% of the time they're correct, so we do have some facility in libMesh for accommodating such codes.

If you set mesh.allow_renumbering(false), *before* calling mesh.read(whatever), then we'll respect the numbering in the mesh file, even after adaptive refinement, at what I believe is only a slight performance hit.

---
Roy
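[A toy illustration, not libMesh code, of why external code that stores raw node indices breaks when a mesh library renumbers: per-node data moves under the permutation, so any saved index now points at a different node's data. Freezing the numbering (the analogue of mesh.allow_renumbering(false) is an identity permutation here) keeps saved indices valid. The `perm[old_id] = new_id` model is an assumption for illustration only.]

```cpp
#include <vector>

// Apply a renumbering to per-node data: perm[old_id] = new_id.
std::vector<double> renumber(const std::vector<double> &node_data,
                             const std::vector<unsigned> &perm)
{
  std::vector<double> out(node_data.size());
  for (unsigned old_id = 0; old_id < perm.size(); ++old_id)
    out[perm[old_id]] = node_data[old_id];
  return out;
}
```

If an external tool recorded "node 0 has value 10.0" and the library then applies perm = {2, 0, 1}, index 0 now holds node 1's old value; with the identity permutation the recorded index still points at the right data.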