From: David K. <dav...@ak...> - 2018-03-20 11:53:49
> I have one more question. If I generate subdomains and a mesh using a
> commercial program, e.g., Trelis, do I need to redefine node numbers and
> subdomains in a libMesh code? If so, I think it is almost impossible.

No, you should not have to redefine anything. If you're using Trelis, then define the subdomains by defining "blocks" in an ExodusII mesh, and they will be read into libMesh and stored as subdomain_ids.

You should not have to change node numbers. In general your code should not care what the node numbering is.

David
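As a concrete illustration of what gets read in: a minimal sketch, assuming a recent libMesh API; the mesh file name "model.e" is hypothetical, not from the thread.

    #include "libmesh/libmesh.h"
    #include "libmesh/mesh.h"
    #include "libmesh/elem.h"

    using namespace libMesh;

    int main (int argc, char ** argv)
    {
      LibMeshInit init (argc, argv);

      Mesh mesh(init.comm());
      mesh.read("model.e"); // ExodusII file; each Trelis "block" becomes a subdomain id

      // Inspect the subdomain id stored on each active element.
      for (const auto & elem : mesh.active_element_ptr_range())
        libMesh::out << "elem " << elem->id()
                     << " is in subdomain " << elem->subdomain_id()
                     << std::endl;

      return 0;
    }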
From: SKang <ss...@pu...> - 2018-03-20 08:32:05
Thank you for your reply, David, even though the mail was written strangely. The mail was written normally in my mail platform but I don't know why it happened.

I have one more question. If I generate subdomains and a mesh using a commercial program, e.g., Trelis, do I need to redefine node numbers and subdomains in a libMesh code? If so, I think it is almost impossible.

I look forward to your response. Thank you.

Best regards,
Kang

------------------------------------------------------------
Shinseong Kang
Graduate Student
Pusan National University
H.P.: 010-9770-6595
E-mail: ss...@pu...
------------------------------------------------------------
From: Roy S. <roy...@ic...> - 2018-03-19 19:28:34
On Mon, 19 Mar 2018, Salazar De Troya, Miguel wrote:

> I found a slight difference between the trace files:
>
> The traceout_8_142118.txt contains
>
> libMesh::MeshTools::libmesh_assert_parallel_consistent_procids<libMesh::Node> (mesh=...) at src/mesh/mesh_tools.C:1608
>
> whereas traceout_57_85461.txt and traceout_11_104555.txt :
>
> libMesh::MeshTools::libmesh_assert_parallel_consistent_procids<libMesh::Node> (mesh=...) at src/mesh/mesh_tools.C:1609
>
> Not sure if this helps.

No; I'm afraid that's expected from that stack trace: processors who think the node should be on processor 57 are screaming that 57 doesn't match the minimum proc_id of 11, but processors who think it should be on processor 11 are screaming that 11 doesn't match the maximum proc_id of 57.

> #7  0x00002aaaaebe174e in libMesh::MeshTools::libmesh_assert_parallel_consistent_procids<libMesh::Node> (mesh=...) at src/mesh/mesh_tools.C:1608
> #8  0x00002aaaaeba931e in libMesh::MeshTools::correct_node_proc_ids (mesh=...) at src/mesh/mesh_tools.C:1844
> #9  0x00002aaaae69a0ce in libMesh::MeshCommunication::make_new_nodes_parallel_consistent (this=0x2320a, mesh=...) at src/mesh/mesh_communication.C:1776
> #10 0x00002aaaaea95919 in libMesh::MeshRefinement::_refine_elements (this=0x2320a) at src/mesh/mesh_refinement.C:1601
> #11 0x00002aaaaea6a4d1 in libMesh::MeshRefinement::refine_and_coarsen_elements (this=0x2320a) at src/mesh/mesh_refinement.C:578
> #12 0x00002aaab9d69dcd in OptiProblem::solve (this=0x7fffffffabd8) at /g/g92/miguel/code/topsm/src/opti_problem.C:370
> #13 0x00000000004371b8 in main (argc=4, argv=0x7fffffffb798) at /g/g92/miguel/code/topsm/test/3D_stress_constraint/linear_stress_opti.C:196
>
> Are there other things I can do to debug this?

One possible fix you could try first: in mesh_communication.C:1767, where it says

    this->make_new_node_proc_ids_parallel_consistent(mesh);

try changing it to

    this->make_node_proc_ids_parallel_consistent(mesh);

It could be that you're in some corner case I didn't imagine, which causes a processor to fail to identify and correct a new potentially-inconsistent processor_id, and if so then maybe telling the code to sync up *all* node processor_id() values will fix that. Let me know whether or not that works?

This is a frighteningly tricky part of the code; you can gawk at the current state of my failed attempts to improve load balancing of processor ids in https://github.com/libMesh/libmesh/pull/1621 in fact. The good news about that PR is it has me digging into corner cases here myself, so hopefully when I'm finished it will fix your code too if my suggested fix above doesn't. The bad news is that there's also a chance of me immediately re-*breaking* your code even if my suggested fix above works - if you wouldn't mind, I'll let you know when the PR is ready so you can run your own tests, just in case they catch something that our own CI misses.

---
Roy
From: Salazar De T. M. <sal...@ll...> - 2018-03-19 19:02:03
I found a slight difference between the trace files:

The traceout_8_142118.txt contains

    libMesh::MeshTools::libmesh_assert_parallel_consistent_procids<libMesh::Node> (mesh=...) at src/mesh/mesh_tools.C:1608

whereas traceout_57_85461.txt and traceout_11_104555.txt:

    libMesh::MeshTools::libmesh_assert_parallel_consistent_procids<libMesh::Node> (mesh=...) at src/mesh/mesh_tools.C:1609

Not sure if this helps.

--

On 3/18/18, 12:36 PM, "Salazar De Troya, Miguel" <sal...@ll...> wrote:

> Hello,
>
> Running a big problem (1,601,777 elements) on 100 processors. I am using a
> DistributedMesh. At some point, I call MeshRefinement::refine_and_coarsen_elements()
> to do AMR, but I get this assertion error (running in debug mode):
>
>     Assertion `min_id == node->processor_id()' failed.  min_id = 11  node->processor_id() = 57
>     Assertion `max_id == node->processor_id()' failed.  max_id = 57  node->processor_id() = 11
>     Assertion `max_id == node->processor_id()' failed.  max_id = 57  node->processor_id() = 11
>
> I also obtain traceout files with numbers: traceout_57_85461.txt,
> traceout_11_104555.txt, traceout_8_142118.txt. Their content is similar and
> looks like this: [...]
>
> Are there other things I can do to debug this?
>
> Thanks
> Miguel
From: David K. <dav...@ak...> - 2018-03-19 12:13:24
On Mon, Mar 19, 2018 at 5:19 AM, SKang <ss...@pu...> wrote:

> Hello, all.
>
> I try to solve an RB problem similar to RB example 5. I used multiple
> domains with length scaling factors. So I need to plot the mesh to match
> the scaling given by the input value to each subdomain. For the libMesh
> code to plot, I will refer to "scale_mesh_and_plot" in RB Example 5.
> However, I don't know how to obtain subdomain information from nodes.
> So I want to ask two questions.
>
> 1. Please let me know if there is a similar code to the following code for a node.
>
>     Elem * elem = &c.get_elem();
>     if (elem->subdomain_id() == AREA_ID1)
>
> 2. If not, I want to know another way to obtain subdomain information from nodes.
>
> Thank you. Best regards, Kang

In libMesh subdomains are for elements, not for nodes.

If you want to associate a subdomain ID with nodes, you could define a map from node ID to subdomain ID, and then loop over the elements of the mesh to get the subdomain ID, and then loop over the nodes of each element and fill in the map. Of course, the subdomains in this approach are not unique, since elements with different subdomain IDs can link to the same node; with this approach you would store the subdomain ID for one of the elements that touches each node.

David
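A minimal sketch of the node-to-subdomain map described above, assuming a recent libMesh API (active_element_ptr_range(), node_id()); the function name is illustrative. At a subdomain interface the last element visited wins, matching the non-uniqueness caveat above.

    #include "libmesh/mesh_base.h"
    #include "libmesh/elem.h"
    #include <map>

    using namespace libMesh;

    std::map<dof_id_type, subdomain_id_type>
    build_node_subdomain_map (const MeshBase & mesh)
    {
      std::map<dof_id_type, subdomain_id_type> node_to_subdomain;

      // Loop over active elements, then over each element's nodes,
      // recording the element's subdomain id for every node it touches.
      for (const auto & elem : mesh.active_element_ptr_range())
        for (unsigned int n = 0; n != elem->n_nodes(); ++n)
          node_to_subdomain[elem->node_id(n)] = elem->subdomain_id();

      return node_to_subdomain;
    }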
From: SKang <ss...@pu...> - 2018-03-19 09:19:28
Hello, all.

I try to solve an RB problem similar to RB example 5. I used multiple domains with length scaling factors. So I need to plot the mesh to match the scaling given by the input value to each subdomain. For the libMesh code to plot, I will refer to "scale_mesh_and_plot" in RB Example 5. However, I don't know how to obtain subdomain information from nodes. So I want to ask two questions.

1. Please let me know if there is a similar code to the following code for a node.

    Elem * elem = &c.get_elem();
    if (elem->subdomain_id() == AREA_ID1)

2. If not, I want to know another way to obtain subdomain information from nodes.

Thank you.
Best regards,
Kang

------------------------------------------------------------
Shinseong Kang
Graduate Student
Pusan National University
H.P.: 010-9770-6595
E-mail: ss...@pu...
------------------------------------------------------------
From: SKang <ss...@pu...> - 2018-03-19 08:48:13
Thank you, David. I solved this problem.

The "mpirun" bundled with Gmsh was running instead of the mpich I installed with PETSc. So I removed Gmsh and all mpi* binaries and rebuilt libMesh. I also fixed my mistake of not setting the path correctly when I installed PETSc.

Thank you again for your help.

Best regards,
Kang

----- Original Message -----
From: David Knezevic <dav...@ak...>
To: "강신성" <ss...@pu...>
Cc: "Libmesh user group" <lib...@li...>
Sent: 2018-02-13 11:30:41
Subject: Re: Re: [Libmesh-users] Using multi-core in execution files

This has happened to me before when I had two MPIs installed on my system (e.g. mpich and openmpi) so that libMesh links to one during configuration and the other is used during "mpirun". I suggest that you make sure there is only one MPI on your system and then rebuild libMesh and try again.

David

On Mon, Feb 12, 2018 at 9:19 PM, 강신성 <ss...@pu...> wrote:

> Thanks for your reply, David.
>
> I tried the command "mpirun -np N" with RB example 5, but there was a
> problem: the same code runs repeatedly, as follows.
>
> ================================
> $ mpirun -np 3 ./example-opt
> ... (skip) ...
> ---- Performing Greedy basis enrichment ----
> ---- Basis dimension: 0 ----
> Performing RB solves on training set
> Maximum error bound is 90.1602
> Performing truth solve at parameter:
>   load_Fx: -3.274396e+00  load_Fy: -4.127732e+00  load_Fz: 9.887087e-01
>   point_load_Fx: -4.856018e+00  point_load_Fy: -4.622528e+00  point_load_Fz: 4.774231e+00
>   x_scaling: 1.146354e+00
> ---- Performing Greedy basis enrichment ----
> ---- Basis dimension: 0 ----
> Performing RB solves on training set
> Maximum error bound is 90.1602
> Performing truth solve at parameter:
>   load_Fx: -3.274396e+00  load_Fy: -4.127732e+00  load_Fz: 9.887087e-01
>   point_load_Fx: -4.856018e+00  point_load_Fy: -4.622528e+00  point_load_Fz: 4.774231e+00
>   x_scaling: 1.146354e+00
> ---- Performing Greedy basis enrichment ----
> ---- Basis dimension: 0 ----
> Performing RB solves on training set
> Maximum error bound is 90.1602
> Performing truth solve at parameter:
>   load_Fx: -3.274396e+00  load_Fy: -4.127732e+00  load_Fz: 9.887087e-01
>   point_load_Fx: -4.856018e+00  point_load_Fy: -4.622528e+00  point_load_Fz: 4.774231e+00
>   x_scaling: 1.146354e+00
> Enriching the RB space
> Enriching the RB space
> Updating RB matrices
> Updating RB matrices
> Updating RB residual terms
> Updating RB residual terms
> Enriching the RB space
> Updating RB matrices
> Updating RB residual terms
> ---- Basis dimension: 1 ----
> Performing RB solves on training set
> ---- Basis dimension: 1 ----
> Performing RB solves on training set
> Maximum error bound is 337.302
> Performing truth solve at parameter:
>   load_Fx: -1.545951e-01  load_Fy: -2.327825e+00  load_Fz: 3.578962e+00
>   point_load_Fx: 8.882769e-01  point_load_Fy: 5.864229e-01  point_load_Fz: 4.982374e+00
>   x_scaling: 1.228777e+00
> Maximum error bound is 337.302
> Performing truth solve at parameter:
>   load_Fx: -1.545951e-01  load_Fy: -2.327825e+00  load_Fz: 3.578962e+00
>   point_load_Fx: 8.882769e-01  point_load_Fy: 5.864229e-01  point_load_Fz: 4.982374e+00
>   x_scaling: 1.228777e+00
> ---- Basis dimension: 1 ----
> Performing RB solves on training set
> Maximum error bound is 337.302
> Performing truth solve at parameter:
>   load_Fx: -1.545951e-01  load_Fy: -2.327825e+00  load_Fz: 3.578962e+00
>   point_load_Fx: 8.882769e-01  point_load_Fy: 5.864229e-01  point_load_Fz: 4.982374e+00
>   x_scaling: 1.228777e+00
> Enriching the RB space
> Updating RB matrices
> Enriching the RB space
> Updating RB matrices
> Updating RB residual terms
> Updating RB residual terms
> Enriching the RB space
> Updating RB matrices
> Updating RB residual terms
> ---- Basis dimension: 2 ----
> Performing RB solves on training set
> ---- Basis dimension: 2 ----
> Performing RB solves on training set
> Maximum error bound is 469.21
> ... (skip) ...
> ---- Basis dimension: 15 ----
> Performing RB solves on training set
> ---- Basis dimension: 15 ----
> Performing RB solves on training set
> ---- Basis dimension: 15 ----
> Performing RB solves on training set
> Maximum error bound is 2.50754
> Maximum number of basis functions reached: Nmax = 15
> *** Warning, This code is deprecated, and likely to be removed in future library versions! ./include/libmesh/mesh_tools.h, line 70, compiled Aug 25 2017 at 02:15:21 ***
> Maximum error bound is 2.50754
> Maximum number of basis functions reached: Nmax = 15
> *** Warning, This code is deprecated, and likely to be removed in future library versions! ./include/libmesh/mesh_tools.h, line 70, compiled Aug 25 2017 at 02:15:21 ***
> Maximum error bound is 2.50754
> Maximum number of basis functions reached: Nmax = 15
> *** Warning, This code is deprecated, and likely to be removed in future library versions! ./include/libmesh/mesh_tools.h, line 70, compiled Aug 25 2017 at 02:15:21 ***
> (end)
> ================================
>
> Other examples had the same problem, too. Did I type the command wrong? If not, I want to know what is wrong.
>
> Best regards,
> Kang
>
> ----- Original Message -----
> From: David Knezevic <dav...@ak...>
> To: "S. Kang" <ss...@pu...>
> Cc: Libmesh user group <lib...@li...>
> Date: Mon, 12 Feb 2018 22:28:09 +0900 (GMT)
> Subject: Re: [Libmesh-users] Using multi-core in execution files
>
> Hello,
>
> You can run the reduced basis code in parallel. You do it just the same way
> that you run any other libMesh code in parallel, i.e. just run with
> "mpirun -np N" where N specifies the number of processors that you want to
> use. If you want more info on the parallelization approach that is used,
> please refer to this paper.
>
> Best,
> David
>
> On Sun, Feb 11, 2018 at 9:30 PM, S. Kang <ss...@pu...> wrote:
>
> > Hello everyone,
> >
> > I am solving RB problems, but it takes too long because of many basis
> > vectors. So, similar to "make -j 10", I want to know how to use
> > multi-core in execution files, not parallel programming.
> >
> > Best regards,
> > S. Kang.
From: Salazar De T. M. <sal...@ll...> - 2018-03-18 19:35:31
Hello,

Running a big problem (1,601,777 elements) on 100 processors. I am using a DistributedMesh. At some point, I call MeshRefinement::refine_and_coarsen_elements() to do AMR, but I get this assertion error (running in debug mode):

    Assertion `min_id == node->processor_id()' failed.  min_id = 11  node->processor_id() = 57
    Assertion `max_id == node->processor_id()' failed.  max_id = 57  node->processor_id() = 11
    Assertion `max_id == node->processor_id()' failed.  max_id = 57  node->processor_id() = 11

I also obtain traceout files with numbers: traceout_57_85461.txt, traceout_11_104555.txt, traceout_8_142118.txt. Their content is similar and looks like this:

    [New LWP 142203]
    [Thread debugging using libthread_db enabled]
    Using host libthread_db library "/usr/lib64/libthread_db.so.1".
    0x00002aaaba58fe09 in __libc_waitpid (pid=143882, stat_loc=stat_loc@entry=0x7fffffff4a90, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
    40        int result = INLINE_SYSCALL (wait4, 4, pid, stat_loc, options, NULL);
    To enable execution of this file add
        add-auto-load-safe-path /usr/tce/packages/gcc/gcc-4.9.3/lib64/libstdc++.so.6.0.20-gdb.py
    line to your configuration file "/g/g92/miguel/.gdbinit".
    To completely disable this security protection add
        set auto-load safe-path /
    line to your configuration file "/g/g92/miguel/.gdbinit".
    For more information about this security protection see the "Auto-loading safe path" section in the GDB manual. E.g., run from the shell:
        info "(gdb)Auto-loading safe path"
    #0  0x00002aaaba58fe09 in __libc_waitpid (pid=143882, stat_loc=stat_loc@entry=0x7fffffff4a90, options=options@entry=0) at ../sysdeps/unix/sysv/linux/waitpid.c:40
    #1  0x00002aaaba512cc2 in do_system (line=line@entry=0x1e3ac078 "gdb -p 142118 -batch -ex bt -ex detach 2>/dev/null 1>temp_print_trace.4jiwUH") at ../sysdeps/posix/system.c:148
    #2  0x00002aaaba513071 in __libc_system (line=0x1e3ac078 "gdb -p 142118 -batch -ex bt -ex detach 2>/dev/null 1>temp_print_trace.4jiwUH") at ../sysdeps/posix/system.c:189
    #3  0x00002aaaad67e17b in (anonymous namespace)::gdb_backtrace (out_stream=...) at src/base/print_trace.C:162
    #4  0x00002aaaad6806ab in libMesh::print_trace (out_stream=...) at src/base/print_trace.C:209
    #5  0x00002aaaad67f7f4 in libMesh::write_traceout () at src/base/print_trace.C:239
    #6  0x00002aaaad6769fb in libMesh::MacroFunctions::report_error (file=0x2320a <Address 0x2320a out of bounds>, line=-46448, date=0x0, time=0xffffffffffffffff <Address 0xffffffffffffffff out of bounds>) at src/base/libmesh_common.C:89
    #7  0x00002aaaaebe174e in libMesh::MeshTools::libmesh_assert_parallel_consistent_procids<libMesh::Node> (mesh=...) at src/mesh/mesh_tools.C:1608
    #8  0x00002aaaaeba931e in libMesh::MeshTools::correct_node_proc_ids (mesh=...) at src/mesh/mesh_tools.C:1844
    #9  0x00002aaaae69a0ce in libMesh::MeshCommunication::make_new_nodes_parallel_consistent (this=0x2320a, mesh=...) at src/mesh/mesh_communication.C:1776
    #10 0x00002aaaaea95919 in libMesh::MeshRefinement::_refine_elements (this=0x2320a) at src/mesh/mesh_refinement.C:1601
    #11 0x00002aaaaea6a4d1 in libMesh::MeshRefinement::refine_and_coarsen_elements (this=0x2320a) at src/mesh/mesh_refinement.C:578
    #12 0x00002aaab9d69dcd in OptiProblem::solve (this=0x7fffffffabd8) at /g/g92/miguel/code/topsm/src/opti_problem.C:370
    #13 0x00000000004371b8 in main (argc=4, argv=0x7fffffffb798) at /g/g92/miguel/code/topsm/test/3D_stress_constraint/linear_stress_opti.C:196

Are there other things I can do to debug this?

Thanks
Miguel
From: David K. <dav...@ak...> - 2018-03-17 14:07:06
On Sat, Mar 17, 2018 at 3:50 AM, 吴家桦Gauvain <cau...@gm...> wrote:

> Hi all,
>
> In order to know the details about the POD-Greedy algorithm implemented
> in TransientRBSystem for reduced basis construction, I referred to the paper
>
>     B. Haasdonk, M. Ohlberger, Reduced basis method for finite volume
>     approximations of parametrized evolution equations, M2AN (Math. Model.
>     Numer. Anal.) 42 (2) (2008) 277-302.
>
> invoked by
>
>     A high-performance parallel implementation of the certified reduced
>     basis method, David J. Knezevic, John W. Peterson
>
> when the TransientRBSystem is discussed. However, the algorithms proposed
> by the former are PCA with fixspace and the selection of the time index
> associated with the largest error bound change. Neither of them mentions
> POD, which makes me confused. Could anyone give me more information on
> the POD-Greedy algorithm used in libMesh? Thanks in advance.

That paper by Haasdonk and Ohlberger is the main reference for POD-Greedy. I guess PCA is equivalent to POD in this context. You can refer to this paper (Section 4.2) for another description of the approach: https://dspace.mit.edu/openaccess-disseminate/1721.1/61956

David
From: 吴家桦Gauvain <cau...@gm...> - 2018-03-17 07:50:59
Hi all,

In order to know the details about the POD-Greedy algorithm implemented in TransientRBSystem for reduced basis construction, I referred to the paper

    B. Haasdonk, M. Ohlberger, Reduced basis method for finite volume
    approximations of parametrized evolution equations, M2AN (Math. Model.
    Numer. Anal.) 42 (2) (2008) 277-302.

invoked by

    A high-performance parallel implementation of the certified reduced
    basis method, David J. Knezevic, John W. Peterson

when the TransientRBSystem is discussed. However, the algorithms proposed by the former are PCA with fixspace and the selection of the time index associated with the largest error bound change. Neither of them mentions POD, which makes me confused. Could anyone give me more information on the POD-Greedy algorithm used in libMesh? Thanks in advance.

Regards,
Gauvain
From: Roy S. <roy...@ic...> - 2018-03-13 20:56:12
On Tue, 13 Mar 2018, Vasileios Vavourakis wrote:

> oh, I thought that the default setting in the Parallel::Communicator (i.e. see the default
> constructor: http://libmesh.github.io/doxygen/classlibMesh_1_1Parallel_1_1Communicator.html#a697f8e599333609a45761828e14659c1)
> for the "communicator" is: MPI_COMM_SELF

It is, but init.comm() isn't a default-constructor communicator, it's a default-for-most-users communicator. Typically when someone runs on N processors it's because they want everything parallelized between N processors, so we make it easy to get an All-N-Processors communicator.

> unless it gets initialised to MPI_COMM_WORLD somewhere in the
> LibMeshInit - apologies, i'm not always successful looking for some
> details in the library documentation :(

That's a very polite way to say "Why don't you even have a single line of documentation for LibMeshInit::comm()?" I would have been tempted to phrase that with much more cursing. I'll put together a PR with better comments now, so it'll get into the online Doxygen eventually.

> I will give it a spin and update libmesh-users asap...

Thanks,

---
Roy
From: Vasileios V. <va...@gm...> - 2018-03-13 19:36:46
thanks Roy for the quick turnaround. my comments below to your reply...

> On 13 Mar 2018, at 21:02, Roy Stogner <roy...@ic...> wrote:
>
> On Tue, 13 Mar 2018, Vasileios Vavourakis wrote:
>
>> I would like to run (in parallel mode) a piece of libMesh code where on
>> each processor a libMesh::Mesh is initialised & stored locally [...]
>
> Yes, I'm afraid so. Although you might have told the mesh to give
> every element to processor 0, when you created the mesh with
> init.comm(), you made every processor on that communicator (i.e. every
> processor in this case, since init defaults to MPI_COMM_WORLD) an
> owner of that mesh. When you do collective operations on a mesh, even
> processors who don't own any elements on the mesh must be involved if
> they're part of that mesh's communicator.

oh, I thought that the default setting for the "communicator" in the Parallel::Communicator default constructor is MPI_COMM_SELF (see http://libmesh.github.io/doxygen/classlibMesh_1_1Parallel_1_1Communicator.html#a697f8e599333609a45761828e14659c1), unless it gets initialised to MPI_COMM_WORLD somewhere in the LibMeshInit - apologies, i'm not always successful looking for some details in the library documentation :(

>> Any suggestions / tips?
>>
>> I am afraid there might be an easy way to do it, however, I wanted to have
>> your opinion about it.
>
> I *think* the thing to do would be to create a new Parallel::Communicator
> wrapper around MPI_COMM_SELF, then use that to create a local Mesh.
>
> I've never done that before, though. If you do it and it works, we'd
> love to have a unit test to make sure it *stays* working through
> future library updates. If you do it and it doesn't work, let us know
> and (especially if you can set up a failing test case) I'll try to
> help figure out what's wrong.

done; will create a separate Communicator object, e.g.:

    LibMeshInit init(argc, argv);
    Communicator lcomm(MPI_COMM_SELF);
    Mesh msh(lcomm, 1);
    //...import the mesh...
    msh.prepare_for_use();
    //
    EquationSystems es(msh);
    // ...add a system here...
    es.init();
    // ...solve the system, etc...

I will give it a spin and update libmesh-users asap...

cheers,
Vasileios
From: Roy S. <roy...@ic...> - 2018-03-13 19:02:30
On Tue, 13 Mar 2018, Vasileios Vavourakis wrote:

> I would like to run (in parallel mode) a piece of libMesh code where on
> each processor a libMesh::Mesh is initialised & stored locally and, hence,
> subsequently initialise the EquationSystems accordingly, for each
> processor independently.
>
> The FE mesh may be the same for all processors, or different, but in
> principle the information needs to be stored independently (and partitioned
> in one subdomain/partition) without any problems when running things in
> parallel.
>
> I have tried to enforce the partitioning of the mesh via, e.g.:
>
>     LibMeshInit init(argc, argv);
>     Mesh msh(init.comm(), 1);
>     //...import the mesh...
>     msh.prepare_for_use();
>     msh.partition(1);
>     //
>     EquationSystems es(msh);
>     // ...add a system here...
>     es.init();
>     // ...solve the system, etc...
>
> However, I noticed that if I run the code with as many processes as there
> are elements in the mesh, then it is OK; otherwise it freezes (especially
> at a point where I "update_global_solution").
> Does this make sense to you?

Yes, I'm afraid so. Although you might have told the mesh to give every element to processor 0, when you created the mesh with init.comm(), you made every processor on that communicator (i.e. every processor in this case, since init defaults to MPI_COMM_WORLD) an owner of that mesh. When you do collective operations on a mesh, even processors who don't own any elements on the mesh must be involved if they're part of that mesh's communicator.

> Any suggestions / tips?
>
> I am afraid there might be an easy way to do it, however, I wanted to have
> your opinion about it.

I *think* the thing to do would be to create a new Parallel::Communicator wrapper around MPI_COMM_SELF, then use that to create a local Mesh.

I've never done that before, though. If you do it and it works, we'd love to have a unit test to make sure it *stays* working through future library updates. If you do it and it doesn't work, let us know and (especially if you can set up a failing test case) I'll try to help figure out what's wrong.

---
Roy
From: Vasileios V. <va...@gm...> - 2018-03-13 14:18:50
Dear libMesh users/developers,

I would like to run (in parallel mode) a piece of libMesh code where on each processor a libMesh::Mesh is initialised & stored locally and, hence, subsequently initialise the EquationSystems accordingly, for each processor independently.

The FE mesh may be the same for all processors, or different, but in principle the information needs to be stored independently (and partitioned in one subdomain/partition) without any problems when running things in parallel.

I have tried to enforce the partitioning of the mesh via, e.g.:

    LibMeshInit init(argc, argv);
    Mesh msh(init.comm(), 1);
    //...import the mesh...
    msh.prepare_for_use();
    msh.partition(1);
    //
    EquationSystems es(msh);
    // ...add a system here...
    es.init();
    // ...solve the system, etc...

However, I noticed that if I run the code with as many processes as there are elements in the mesh, then it is OK; otherwise it freezes (especially at a point where I "update_global_solution"). Does this make sense to you?

Any suggestions / tips?

I am afraid there might be an easy way to do it, however, I wanted to have your opinion about it.

cheers,
Vasileios
From: David K. <dav...@ak...> - 2018-03-09 20:02:56
On Fri, Mar 9, 2018 at 2:53 PM, Roy Stogner <roy...@ic...> wrote:

> On Fri, 9 Mar 2018, David Knezevic wrote:
>
>> - If I were to look into adding a special case for non-uniform 1st
>> to 2nd order refinement for LAGRANGE variables, do you think this
>> would be of interest to include in libMesh, or would it be too
>> specific to include? (I'd like to know if it's potentially of
>> broader interest before looking further into this.)
>
> Uninteresting to me, but I try like mad to avoid writing
> LAGRANGE-specific code, so if I want C0 p refinement I just use
> HIERARCHIC. Others who've coded themselves too far into a
> LAGRANGE-only corner might disagree... but from what I've seen the
> very first way to screw up is to assume that every node has a dof for
> every variable, and any code like that will *still* be broken if those
> users have second-order geometric elements (so they can support p=2)
> but start with p=1.
>
>> - How complex do you think it would be to add that special case?
>
> Not very. I personally don't think it's worth it when you'd just end
> up stuck restricted by p<=2 anyways, but if anyone else disagrees I'd
> still happily merge their work. ;-)

OK, thanks for that info. I'll think some more about whether this is worth working on or not for my use-case.

Thanks,
David
From: Roy S. <roy...@ic...> - 2018-03-09 19:53:58
On Fri, 9 Mar 2018, David Knezevic wrote:

> - If I were to look into adding a special case for non-uniform 1st
> to 2nd order refinement for LAGRANGE variables, do you think this
> would be of interest to include in libMesh, or would it be too
> specific to include? (I'd like to know if it's potentially of
> broader interest before looking further into this.)

Uninteresting to me, but I try like mad to avoid writing LAGRANGE-specific code, so if I want C0 p refinement I just use HIERARCHIC. Others who've coded themselves too far into a LAGRANGE-only corner might disagree... but from what I've seen the very first way to screw up is to assume that every node has a dof for every variable, and any code like that will *still* be broken if those users have second-order geometric elements (so they can support p=2) but start with p=1.

> - How complex do you think it would be to add that special case?

Not very. I personally don't think it's worth it when you'd just end up stuck restricted by p<=2 anyways, but if anyone else disagrees I'd still happily merge their work. ;-)

---
Roy
From: David K. <dav...@ak...> - 2018-03-09 19:32:32
On Fri, Mar 9, 2018 at 2:21 PM, Roy Stogner <roy...@ic...> wrote:

> p-refinement is supported, but adaptive p-refinement currently isn't.
> We require basis functions to be hierarchic (not necessarily
> HIERARCHIC) to generate constraint equations between neighboring
> elements of different p refinement levels.
>
> So you can do uniform refinement for error estimation, but that's
> about it.

OK, got it, thanks. Two questions:

- If I were to look into adding a special case for non-uniform 1st to 2nd order refinement for LAGRANGE variables, do you think this would be of interest to include in libMesh, or would it be too specific to include? (I'd like to know if it's potentially of broader interest before looking further into this.)

- How complex do you think it would be to add that special case?

David
From: Roy S. <roy...@ic...> - 2018-03-09 19:21:28
p-refinement is supported, but adaptive p-refinement currently isn't. We require basis functions to be hierarchic (not necessarily HIERARCHIC) to generate constraint equations between neighboring elements of different p refinement levels.

So you can do uniform refinement for error estimation, but that's about it.

---
Roy
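A short sketch of the uniform p-refinement workflow mentioned above, assuming MeshRefinement's uniformly_p_refine()/uniformly_p_coarsen() API; `mesh` and `es` (an EquationSystems) are assumed to already exist.

    #include "libmesh/mesh_refinement.h"

    // Uniformly raise every element's p-level by one, re-setup the dofs,
    // do whatever error estimation is needed, then coarsen back.
    libMesh::MeshRefinement mesh_refinement(mesh);

    mesh_refinement.uniformly_p_refine(1);  // p -> p+1 on every element
    es.reinit();                            // redistribute dofs for the new p

    // ... assemble/solve on the enriched space, compute error estimates ...

    mesh_refinement.uniformly_p_coarsen(1); // back to the original order
    es.reinit();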
From: David K. <dav...@ak...> - 2018-03-09 19:06:51
I'm using LAGRANGE variables for an elasticity problem and I'd like to be able to increase the variable order from 1 to 2 on specified elements. Is it possible to do this using p-refinement in libMesh?

I had a vague impression that p-refinement was only supported for discontinuous or hierarchic basis functions, but from looking at the fe_lagrange*.C code it seems that it does use elem->p_level(), so I wanted to check if libMesh already handles this case? If so, does it automatically constrain the hanging dofs on the interface of first and second order elements (I guess this is almost the same as AMR, so maybe it's already supported)?

Thanks,
David
From: John P. <jwp...@gm...> - 2018-03-08 22:11:47
Hello all,

There have been quite a few changes since the 1.2.x series in Summer 2017, the biggest of which is probably that libMesh now requires a C++11-conforming compiler.

For a more detailed list of changes in this release, see:
https://github.com/libMesh/libmesh/blob/v1.3.0-rc1/NEWS

For links to the packaged tarballs, visit:
https://github.com/libMesh/libmesh/releases/tag/v1.3.0-rc1

As always, we appreciate any feedback you can provide on your experiences with downloading and building this release.

Thanks,
John
From: Roy S. <roy...@ic...> - 2018-03-08 16:00:12
On Thu, 8 Mar 2018, Moh...@en... wrote:

> I have recently installed libMesh and I am trying to use it. In fact, I
> have my C++ code working on a single Gauss point. Is it possible to
> use libMesh with my code in order to obtain my finite element code
> (like the MOOSE framework)? Unfortunately there isn't a user manual for
> that. Is there someone who has already done this?

Lots of people; independent uses of libMesh predate (and based on published papers might still be more popular than) uses of middleware frameworks. But there's no user manual for it because there's no one right way to do it - if there was then we'd just encapsulate that way in the One True Framework and nobody would use anything else.

Most independent libMesh application developers start with whichever of the examples is closest to what they're developing, and add complexity from there.

---
Roy
From: <Moh...@en...> - 2018-03-08 10:29:24
Hello,

I have recently installed libMesh and I am trying to use it. In fact, I have my C++ code working on a single Gauss point. Is it possible to use libMesh with my code in order to obtain my finite element code (like the MOOSE framework)? Unfortunately there isn't a user manual for that. Is there someone who has already done this?

Best regards,
Aziz
From: Manav B. <bha...@gm...> - 2018-03-01 17:36:14
Hi,

I am using DofMap::add_constraint_row() to constrain some dofs to zero:

    libMesh::DofConstraintRow c_row;
    dof_map.add_constraint_row(*dof_it, c_row, true);

This is implemented in an object derived from System::Constraint. Then, I call System::reinit_constraints() before assembly.

This has been working fine, but I have one case that is tripping the following error when I call SparseMatrix::add_matrix(). I am going to start to dig into this to figure out why this might be happening, but if there are any quick words of advice, that would be very helpful.

I am a bit puzzled since this is for a diagonal entry. My understanding is that constraining dofs should not remove them from the sparsity pattern. I guess I am missing something here.

Regards,
Manav

    [0]PETSC ERROR: --------------------- Error Message --------------------------------------------------------------
    [0]PETSC ERROR: Argument out of range
    [0]PETSC ERROR: New nonzero at (7308,7308) caused a malloc
    Use MatSetOption(A, MAT_NEW_NONZERO_ALLOCATION_ERR, PETSC_FALSE) to turn off this check
    [0]PETSC ERROR: See http://www.mcs.anl.gov/petsc/documentation/faq.html for trouble shooting.
    [0]PETSC ERROR: Petsc Release Version 3.8.0, unknown
    [0]PETSC ERROR: /Users/manav/Library/Developer/Xcode/DerivedData/mast_workspace-hfyswujevmwgyugsmyueuaouvmkj/Build/Products/Debug/example_driver on a arch-darwin-c-opt named Dhcp-90-164.HPC.MsState.Edu by manav Thu Mar 1 10:42:54 2018
    [0]PETSC ERROR: Configure options --prefix=/Users/manav/Documents/codes/numerical_lib/petsc/petsc/../ --CC=mpicc-openmpi-clang40 --CXX=mpicxx-openmpi-clang40 --FC=mpif90-openmpi-clang40 --with-fortran=0 --with-mpiexec=/opt/local/bin/mpiexec-openmpi-clang40 --with-shared-libraries=1 --with-x=1 --with-x-dir=/opt/X11 --with-debugging=0 --with-lapack-lib=/usr/lib/liblapack.dylib --with-blas-lib=/usr/lib/libblas.dylib --download-superlu=yes --download-superlu_dist=yes --download-suitesparse=yes --download-mumps=yes --download-scalapack=yes --download-parmetis=yes --download-metis=yes --download-hypre=yes --download-ml=yes
    [0]PETSC ERROR: #1 MatSetValues_SeqAIJ() line 481 in /Users/manav/Documents/codes/numerical_lib/petsc/petsc/src/mat/impls/aij/seq/aij.c
    [0]PETSC ERROR: #2 MatSetValues() line 1270 in /Users/manav/Documents/codes/numerical_lib/petsc/petsc/src/mat/interface/matrix.c
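For context, a hedged, self-contained sketch of the System::Constraint pattern described above; the class and member names are illustrative, not the actual code from this message, and the sketch does not reproduce the error.

    #include "libmesh/system.h"
    #include "libmesh/dof_map.h"
    #include <set>

    // Pins a set of dofs to zero by giving each an empty constraint row
    // (empty row with the implicit zero rhs => u_i = 0).
    class ZeroDofConstraint : public libMesh::System::Constraint
    {
    public:
      ZeroDofConstraint (libMesh::System & sys, std::set<libMesh::dof_id_type> dofs)
        : _sys(sys), _dofs(std::move(dofs)) {}

      virtual void constrain () override
      {
        libMesh::DofMap & dof_map = _sys.get_dof_map();
        for (const auto dof : _dofs)
          {
            libMesh::DofConstraintRow c_row; // intentionally empty
            dof_map.add_constraint_row(dof, c_row, /*forbid_constraint_overwrite=*/true);
          }
      }

    private:
      libMesh::System & _sys;
      std::set<libMesh::dof_id_type> _dofs;
    };

The object would be registered with System::attach_constraint_object() so that reinit_constraints() picks it up.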
From: Roy S. <roy...@ic...> - 2018-03-01 17:05:26
On Wed, 28 Feb 2018, David Knezevic wrote:

> I would like to implement a periodic boundary condition on a model with
> circular symmetry, e.g. solve on a sector of a disk with periodicity. To
> implement this it seems like all I'd need to do is subclass
> PeriodicBoundary and override get_corresponding_pos() to impose the
> appropriate rotation rather than just a translation, and then add the
> PeriodicBoundary subclass object to the system in the usual way. Is that
> indeed all that's required,

Yes, if it's working correctly!

> or would we need something more?

If you have something vector or tensor valued, like e.g. a *velocity* variable, and your formulation doesn't already use polar coordinates for the components of that variable, then you're in trouble if you want anything other than 0/90/180/270 degree rotations, because we don't currently have any way to specify a periodic BC for one variable as a weighted sum of other variables.

> Has anyone tried this case before?

We actually have some CI coverage for it thanks to the MOOSE guys: tests/bcs/periodic/orthogonal_pbc_on_square.i

---
Roy
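To make the subclassing suggestion concrete, a minimal sketch (not from the thread) of a PeriodicBoundary that maps points by rotation about the z-axis. A clone() override mirroring PeriodicBoundary::clone() - so that the INVERSE copy rotates by -angle - is also needed in practice; its exact signature varies between libMesh versions, so it is omitted here.

    #include "libmesh/periodic_boundary.h"
    #include <cmath>

    using namespace libMesh;

    // Hypothetical subclass: get_corresponding_pos() rotates instead of
    // translating, pairing the two radial edges of a sector of a disk.
    class RotationalPeriodicBoundary : public PeriodicBoundary
    {
    public:
      explicit RotationalPeriodicBoundary (Real angle) : _angle(angle) {}

      virtual Point get_corresponding_pos (const Point & pt) const override
      {
        const Real c = std::cos(_angle), s = std::sin(_angle);
        return Point(c * pt(0) - s * pt(1),
                     s * pt(0) + c * pt(1),
                     pt(2));
      }

    private:
      Real _angle; // sector angle, e.g. 2*pi/n for an n-fold symmetric model
    };

As with a translational PeriodicBoundary, the myboundary/pairedboundary ids would be set on the object, which is then added via DofMap::add_periodic_boundary().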
From: David K. <dav...@ak...> - 2018-03-01 17:04:50
On Thu, Mar 1, 2018 at 12:00 PM, John Peterson <jwp...@gm...> wrote:

> On Thu, Mar 1, 2018 at 9:30 AM, David Knezevic <dav...@ak...> wrote:
>
>> On Thu, Mar 1, 2018 at 10:50 AM, Roy Stogner <roy...@ic...> wrote:
>>
>>> If you have something vector or tensor valued, like e.g. a *velocity*
>>> variable, and your formulation doesn't already use polar coordinates
>>> for the components of that variable, then you're in trouble if you
>>> want anything other than 0/90/180/270 degree rotations, because we
>>> don't currently have any way to specify a periodic BC for one variable
>>> as a weighted sum of other variables.
>>
>> Ah, OK. I was hoping to do cases other than 0/90/180/270 degree rotations,
>> and I'm considering elasticity, hence (u,v,w) displacement variables. As a
>> result I think the current implementation won't work for me since I'd need
>> one variable to be a weighted sum of the others.
>>
>> I can look into adding this, any suggestions on where to start?
>>
>>> We actually have some CI coverage for it thanks to the MOOSE guys:
>>> tests/bcs/periodic/orthogonal_pbc_on_square.i
>>
>> Thanks, I'll have a look.
>>
>> John, regarding this:
>>
>>> We have support for user-defined forward and inverse periodic boundary
>>> transform functions in MOOSE if you'd like to check and see how it's done
>>> there.
>>
>> I'd be interested to check that, can you point me to the relevant part of MOOSE?
>
> It's just implemented by overriding get_corresponding_pos() as you said.
>
> https://github.com/idaholab/moose/blob/devel/framework/src/bcs/FunctionPeriodicBoundary.C

OK, thanks!

David