From: John P. <jwp...@gm...> - 2018-06-27 22:01:05
|
On Wed, Jun 27, 2018 at 3:58 PM, Renato Poli <re...@gm...> wrote: > Hi, > > I need to evaluate the "greatest" value of a vector that is created in > parallel. It is a matter of iterating over all the local elements and > calculate the parts of this distributed vector. > > What is the best way to syncronize this vector to all processors - so they > can have the whole set of data and can find out the "greatest" value? Any > example to suggest? > Are you using NumericVector? It has a max() member. -- John |
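A minimal, self-contained sketch of what John suggests; the vector size and fill pattern are illustrative assumptions, not code from this thread. NumericVector::max() performs the parallel reduction, so every processor sees the same global maximum:

#include <memory>
#include "libmesh/libmesh.h"
#include "libmesh/numeric_vector.h"

int main (int argc, char ** argv)
{
  libMesh::LibMeshInit init (argc, argv);

  // A distributed vector with 100 global entries.
  std::unique_ptr<libMesh::NumericVector<libMesh::Number>> vec =
    libMesh::NumericVector<libMesh::Number>::build(init.comm());
  vec->init(100);

  // Each processor fills only the entries it owns.
  for (libMesh::numeric_index_type i = vec->first_local_index();
       i < vec->last_local_index(); ++i)
    vec->set(i, static_cast<libMesh::Number>(i));
  vec->close();

  // Global maximum, identical on every rank.
  libMesh::out << "max = " << vec->max() << std::endl;
  return 0;
}

If the candidate values live in a plain per-processor scalar rather than a NumericVector, init.comm().max(local_value) does the same reduction directly.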
From: Renato P. <re...@gm...> - 2018-06-27 21:58:16
|
Hi, I need to evaluate the "greatest" value of a vector that is created in parallel. It is a matter of iterating over all the local elements and calculating the parts of this distributed vector. What is the best way to synchronize this vector to all processors - so they can have the whole set of data and can find out the "greatest" value? Any example to suggest? Thanks, Renato |
From: John P. <jwp...@gm...> - 2018-06-27 14:17:09
|
On Wed, Jun 27, 2018 at 7:42 AM, Bailey Curzadd <bcu...@gm...> wrote: > Hi there, > > > I’m using libMesh to perform topology optimization on a mesh with a > spherical surface. I only have a single LinearImplicitSystem. The normal > displacement of nodes on the spherical surface needs to be constrained to > force the surface to remain spherical. I am using Lagrange multipliers to > do this, but have found that adding a large number of scalar variables > results in EquationSystems::init() taking an extraordinary amount of time > to build the system. For a mesh with 9585 nodes, 45728 elements, and > first-order interpolation, EquationSystems::init() takes practically no > time at all without scalar variables for the MPCs. However, when I add 1332 > scalar variables (one for each node on the spherical surface, > EquationSystems::init() runs for over an hour. Is this to be expected with > libMesh? Is there a more efficient way to apply MPCs to numerous nodes? > We do have support for per-subdomain SCALAR variable coupling, but AFAIK, SCALAR variables couple to *every* other dof (including other SCALAR variables) in the System, and so can result in not-very-sparse sparsity patterns for System matrices. This can lead to a lot of memory being allocated (which can slow things down if your machine goes into swap) but it might also be exposing an inefficiency in the DofMap or other code... Would it be possible to discretize your Lagrange multiplier variable using a (linear Lagrange, constant monomial, etc.) field variable instead of a bunch of SCALARS? -- John |
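A sketch of the alternative John raises: one multiplier field restricted to a subdomain instead of ~1300 SCALAR variables. The system name, the subdomain id, and the assumption that the constrained surface elements carry their own subdomain id are illustrative, not taken from Bailey's code:

#include <set>
#include "libmesh/equation_systems.h"
#include "libmesh/linear_implicit_system.h"

void add_variables (libMesh::EquationSystems & es)
{
  libMesh::LinearImplicitSystem & sys =
    es.add_system<libMesh::LinearImplicitSystem>("Elasticity");

  // Displacement is defined on the whole mesh.
  sys.add_variable("u", libMesh::FIRST, libMesh::LAGRANGE);

  // One Lagrange-multiplier *field*, active only on the subdomain
  // holding the spherical surface.  Its dofs couple only to nearby
  // dofs, so the sparsity pattern stays sparse -- unlike SCALAR
  // variables, which couple to every other dof in the System.
  std::set<libMesh::subdomain_id_type> surface_ids {2};
  sys.add_variable("lambda", libMesh::FIRST, libMesh::LAGRANGE, &surface_ids);
}

Whether this fits depends on how the surface constraint is actually imposed; the sketch only shows the discretization change John is suggesting.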
From: Bailey C. <bcu...@gm...> - 2018-06-27 13:42:53
|
Hi there, I’m using libMesh to perform topology optimization on a mesh with a spherical surface. I only have a single LinearImplicitSystem. The normal displacement of nodes on the spherical surface needs to be constrained to force the surface to remain spherical. I am using Lagrange multipliers to do this, but have found that adding a large number of scalar variables results in EquationSystems::init() taking an extraordinary amount of time to build the system. For a mesh with 9585 nodes, 45728 elements, and first-order interpolation, EquationSystems::init() takes practically no time at all without scalar variables for the MPCs. However, when I add 1332 scalar variables (one for each node on the spherical surface), EquationSystems::init() runs for over an hour. Is this to be expected with libMesh? Is there a more efficient way to apply MPCs to numerous nodes? Bailey C |
From: Griffith, B. E. <bo...@em...> - 2018-06-20 16:22:20
|
> On Jun 19, 2018, at 11:58 PM, Roy Stogner <roy...@ic...> wrote: > > > On Tue, 12 Jun 2018, Griffith, Boyce Eugene wrote: > >> OK, so something like: >> >> partiontiner->repartition(); >> equation_systems->reinit(); >> >> and then continue? > > Did this work for you? > > The code above is what I'd have written as v0.1 too, but I added a > quick unit test to make sure and (at least with DistributedMesh) no > luck. I'll see what I can figure out; surely you can do this without > a whole prepare_for_use()… I haven’t gotten a chance to implement the new partitioner yet. We will be using ReplicatedMesh for the foreseeable future. — Boyce |
From: Roy S. <roy...@ic...> - 2018-06-20 13:46:36
|
On Tue, 12 Jun 2018, Vinicius C. Reis wrote: > Hi John, I pasted a somewhat smaller version of my code below. By playing > with the refinement steps limit and the commented out init() and reinit() > lines you should be able to reproduce the two exceptions I got. Could you email me a copy of the mesh.msh you're using? (or replace it with a generated mesh, but the file should be enough) Thanks, --- Roy |
From: Roy S. <roy...@ic...> - 2018-06-20 03:58:24
|
On Tue, 12 Jun 2018, Griffith, Boyce Eugene wrote: > OK, so something like: > > partiontiner->repartition(); > equation_systems->reinit(); > > and then continue? Did this work for you? The code above is what I'd have written as v0.1 too, but I added a quick unit test to make sure and (at least with DistributedMesh) no luck. I'll see what I can figure out; surely you can do this without a whole prepare_for_use()... --- Roy |
From: Michael P. <mpo...@pu...> - 2018-06-13 19:27:38
|
On 06/13/2018 03:15 PM, John Peterson wrote: > > > On Wed, Jun 13, 2018 at 11:03 AM, Michael Povolotskyi > <mpo...@pu... <mailto:mpo...@pu...>> wrote: > > Hello John, > > I played with it, and this is what worked for me. > > #include <vector> > #include <sstream> > using namespace std; > > #define private public//to get access to > libMesh::LibMeshInit._vtk_mpi_controller > > #include "libmesh/mesh_generation.h" > #include "libmesh/mesh.h" > #include "libmesh/vtk_io.h" > #include "libmesh/libmesh.h" > #include "vtkMPIController.h" > int main(int argc, char ** argv) > { > > MPI_Init(&argc, &argv); > > { > libMesh::LibMeshInit init (argc, argv,MPI_COMM_WORLD); > int rank; > MPI_Comm_rank(MPI_COMM_WORLD, &rank); > > vtkMPICommunicator* mcomm = > vtkMPICommunicator::GetWorldCommunicator()->NewInstance (); > mcomm->SplitInitialize (mcomm, rank, 0); > > init._vtk_mpi_controller->SetCommunicator (mcomm); > > libMesh::Mesh > mesh(libMesh::Parallel::Communicator(MPI_COMM_SELF)); > libMesh::MeshTools::Generation::build_cube (mesh, > 10, 10, 5, 0.0, 2.0, 0.0, 3.0, 0.0, 4.0,libMesh::HEX8); > > > if (rank == 0) > { > libMesh::VTKIO out(mesh); > out.write("mesh1.pvtu"); > } > } > MPI_Finalize(); > return 0; > > > > } > > > What do you think? Can you provide an access to > libMesh::LibMeshInit._vtk_mpi_controller? > Or there is a better solution? > > > I think it would be reasonable to provide a public accessor for > _vtk_mpi_controller, as long as we can still get away with a forward > declaration for vtkMPIController. > > Would it be possible to do the same algorithm from within > VTKIO::read()? At that point, we have a Communicator (mesh.comm()) for > the Mesh we are currently writing, so we could probably do the same > New/Split/Set series of steps there, but unfortunately we don't have > access to the LibMeshInit at that point... > > -- > John Would you add the access to _vtk_mpi_controller for me? I can do it myself, but I prefer to be consistent with you. It looks to me that for a regular solution you need to reset the vtkMPICommunicator before any read/write event. Can you store the vtkMPICommunicator inside the mesh class? Michael. |
From: John P. <jwp...@gm...> - 2018-06-13 19:15:33
|
On Wed, Jun 13, 2018 at 11:03 AM, Michael Povolotskyi <mpo...@pu...> wrote: > Hello John, > > I played with it, and this is what worked for me. > > #include <vector> > #include <sstream> > using namespace std; > > #define private public //to get access to libMesh::LibMeshInit._vtk_mpi_ > controller > > #include "libmesh/mesh_generation.h" > #include "libmesh/mesh.h" > #include "libmesh/vtk_io.h" > #include "libmesh/libmesh.h" > #include "vtkMPIController.h" > int main(int argc, char ** argv) > { > > MPI_Init(&argc, &argv); > > { > libMesh::LibMeshInit init (argc, argv,MPI_COMM_WORLD); > int rank; > MPI_Comm_rank(MPI_COMM_WORLD, &rank); > > vtkMPICommunicator* mcomm = vtkMPICommunicator:: > GetWorldCommunicator()->NewInstance (); > mcomm->SplitInitialize (mcomm, rank, 0); > > init._vtk_mpi_controller->SetCommunicator (mcomm); > > libMesh::Mesh mesh(libMesh::Parallel::Communicator(MPI_COMM_SELF)); > libMesh::MeshTools::Generation::build_cube (mesh, > 10, 10, 5, 0.0, 2.0, 0.0, > 3.0, 0.0, 4.0,libMesh::HEX8); > > > if (rank == 0) > { > libMesh::VTKIO out(mesh); > out.write("mesh1.pvtu"); > } > } > MPI_Finalize(); > return 0; > > > > } > > What do you think? Can you provide an access to > libMesh::LibMeshInit._vtk_mpi_controller? > Or there is a better solution? > I think it would be reasonable to provide a public accessor for _vtk_mpi_controller, as long as we can still get away with a forward declaration for vtkMPIController. Would it be possible to do the same algorithm from within VTKIO::read()? At that point, we have a Communicator (mesh.comm()) for the Mesh we are currently writing, so we could probably do the same New/Split/Set series of steps there, but unfortunately we don't have access to the LibMeshInit at that point... -- John |
From: Michael P. <mpo...@pu...> - 2018-06-13 17:57:04
|
Thank you, now I see. On 06/13/2018 01:29 PM, Roy Stogner wrote: > > On Wed, 13 Jun 2018, John Peterson wrote: > >> MPI is already initialized in LibMeshInit, so no need to do this >> manually >> unless your real code does MPI communication before LibMeshInit... >> >>> MPI_Finalize(); >> >> MPI_Finalize is called in the LibMeshInit destructor, no need to call it >> manually. > > Just to clarify: there's no need to call MPI_Finalize manually > *unless* you also called MPI_Init manually. If we see that you've > already initialized MPI yourself then we leave you with the > responsibility to finalize yourself too. > --- > Roy |
From: Roy S. <roy...@ic...> - 2018-06-13 17:29:45
|
On Wed, 13 Jun 2018, John Peterson wrote: > MPI is already initialized in LibMeshInit, so no need to do this manually > unless your real code does MPI communication before LibMeshInit... > >> MPI_Finalize(); > > MPI_Finalize is called in the LibMeshInit destructor, no need to call it > manually. Just to clarify: there's no need to call MPI_Finalize manually *unless* you also called MPI_Init manually. If we see that you've already initialized MPI yourself then we leave you with the responsibility to finalize yourself too. --- Roy |
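A sketch of the ownership rule Roy describes (illustrative, not from the thread): whoever calls MPI_Init is responsible for MPI_Finalize, and LibMeshInit only finalizes MPI if it was the one that initialized it.

#include <mpi.h>
#include "libmesh/libmesh.h"

int main (int argc, char ** argv)
{
  // The application initializes MPI itself, e.g. because it needs to
  // communicate before libMesh is set up ...
  MPI_Init(&argc, &argv);

  {
    // ... so LibMeshInit sees that MPI is already initialized and
    // will *not* finalize it in its destructor.
    libMesh::LibMeshInit init (argc, argv, MPI_COMM_WORLD);
    // ... use libMesh here ...
  }

  // Therefore the application must finalize MPI itself.
  MPI_Finalize();
  return 0;
}

If the explicit MPI_Init/MPI_Finalize calls are dropped and LibMeshInit is constructed first, it handles both ends itself.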
From: Michael P. <mpo...@pu...> - 2018-06-13 17:03:11
|
Hello John, I played with it, and this is what worked for me. #include <vector> #include <sstream> using namespace std; #define private public//to get access to libMesh::LibMeshInit._vtk_mpi_controller #include "libmesh/mesh_generation.h" #include "libmesh/mesh.h" #include "libmesh/vtk_io.h" #include "libmesh/libmesh.h" #include "vtkMPIController.h" int main(int argc, char ** argv) { MPI_Init(&argc, &argv); { libMesh::LibMeshInit init (argc, argv,MPI_COMM_WORLD); int rank; MPI_Comm_rank(MPI_COMM_WORLD, &rank); vtkMPICommunicator* mcomm = vtkMPICommunicator::GetWorldCommunicator()->NewInstance (); mcomm->SplitInitialize (mcomm, rank, 0); init._vtk_mpi_controller->SetCommunicator (mcomm); libMesh::Mesh mesh(libMesh::Parallel::Communicator(MPI_COMM_SELF)); libMesh::MeshTools::Generation::build_cube (mesh, 10, 10, 5, 0.0, 2.0, 0.0, 3.0, 0.0, 4.0,libMesh::HEX8); if (rank == 0) { libMesh::VTKIO out(mesh); out.write("mesh1.pvtu"); } } MPI_Finalize(); return 0; } What do you think? Can you provide an access to libMesh::LibMeshInit._vtk_mpi_controller? Or there is a better solution? Michael. On 06/13/2018 12:28 PM, John Peterson wrote: > > > On Wed, Jun 13, 2018 at 10:14 AM, Michael Povolotskyi > <mpo...@pu... <mailto:mpo...@pu...>> wrote: > > Thank you John, > > I do have some MPI code before LibMeshInit in my real application. > > Also, If I do not call MPI_Finalize at the very end I'm getting > warning messages from MPI. > > Thank you for you explanation with .pvtu, I have changed my code > to libMesh::VTKIO out(mesh); out.write("mesh1.pvtu"); and this > worked. > > Do you have any suggestions for the Problem #2 that I reported? > > It seems to me that the problem is that the communicator of the > Mesh object is smaller than the communicator of the VTK. This > causes a problem for me. > > > Hmm, I don't think we have considered this use case in the past, the > vtkMPIController is initialized in libmesh.C, and it must be assuming > MPI_COMM_WORLD... > > _vtk_mpi_controller = vtkMPIController::New(); > _vtk_mpi_controller->Initialize(&argc, const_cast<char ***>(&argv), > /*initialized_externally=*/1); > _vtk_mpi_controller->SetGlobalController(_vtk_mpi_controller); > > In order for the parallel VTK writer to work, the Mesh probably > therefore also needs to be using MPI_COMM_WORLD. > > -- > John |
From: Michael P. <mpo...@pu...> - 2018-06-13 16:49:33
|
Thank you John, I do have some MPI code before LibMeshInit in my real application. Also, If I do not call MPI_Finalize at the very end I'm getting warning messages from MPI. Thank you for you explanation with .pvtu, I have changed my code to libMesh::VTKIO out(mesh); out.write("mesh1.pvtu"); and this worked. Do you have any suggestions for the Problem #2 that I reported? It seems to me that the problem is that the communicator of the Mesh object is smaller than the communicator of the VTK. This causes a problem for me. Thank you, Michael. On 06/13/2018 12:01 PM, John Peterson wrote: > > > On Tue, Jun 12, 2018 at 7:48 PM, Povolotskyi, Mykhailo > <mpo...@pu... <mailto:mpo...@pu...>> wrote: > > Dear Libmesh develepers, > > I want to create different instances of mesh on several MPI > processes. Then I want to output the mesh from one MPI process. > > > I am facing problems with the following code: > > > #include "libmesh/libmesh.h" > #include "libmesh/mesh_generation.h" > #include "libmesh/mesh.h" > int main(int argc, char ** argv) > { > > MPI_Init(&argc, &argv); > > > MPI is already initialized in LibMeshInit, so no need to do this > manually unless your real code does MPI communication before > LibMeshInit... > > { > libMesh::LibMeshInit init (argc, argv,MPI_COMM_WORLD); > libMesh::Mesh > mesh(libMesh::Parallel::Communicator(MPI_COMM_SELF)); > libMesh::MeshTools::Generation::build_cube (mesh, > 10, 10, 5, 0.0, > 2.0, 0.0, 3.0, 0.0, 4.0,libMesh::HEX8); > > int rank; > MPI_Comm_rank(MPI_COMM_WORLD, &rank); > > > Again, I'd just use init.comm().rank() to find out the rank. > > > if (rank == 0) > { > mesh.write("mesh1.vtu"); > } > } > MPI_Finalize(); > > > MPI_Finalize is called in the LibMeshInit destructor, no need to call > it manually. > > return 0; > } > > > > The problems are as follows: > > Problem #1) If I run the code in serial, I'm getting warning: > > Warning: This MeshOutput subclass only supports meshes which have > been serialized! > Warning: This MeshOutput subclass only supports meshes which have > been serialized! > The .pvtu extension should be used when writing VTK files in libMesh. > > > My question: > > a) how to avoid the first waning about the mesh not being serialized? > > > Unless you have configured libmesh with --enable-parmesh (and > therefore Mesh == DistributedMesh) this warning can be safely ignored. > > > b) I tried to change the filename to mesh1.pvtu > > In this case I'm getting an error message: > > > > ERROR: Unrecognized file extension: mesh1.pvtu > I understand the following: > > > This is just a (possibly unnecessary) limitation of the NamebasedIO > class, it should work (as in misc_ex11 and misc_ex4 if you explicitly > construct a VTKIO object and then call the write() method. > > -- > John |
From: Michael P. <mpo...@pu...> - 2018-06-13 16:35:10
|
Hi John, It would be absolutely impossible for me to use Mesh on MPI_COMM_WORLD. Do you thing, will it possible either to reset the vtkMPIController or to split his communicator? Michael. On 06/13/2018 12:28 PM, John Peterson wrote: > > > On Wed, Jun 13, 2018 at 10:14 AM, Michael Povolotskyi > <mpo...@pu... <mailto:mpo...@pu...>> wrote: > > Thank you John, > > I do have some MPI code before LibMeshInit in my real application. > > Also, If I do not call MPI_Finalize at the very end I'm getting > warning messages from MPI. > > Thank you for you explanation with .pvtu, I have changed my code > to libMesh::VTKIO out(mesh); out.write("mesh1.pvtu"); and this > worked. > > Do you have any suggestions for the Problem #2 that I reported? > > It seems to me that the problem is that the communicator of the > Mesh object is smaller than the communicator of the VTK. This > causes a problem for me. > > > Hmm, I don't think we have considered this use case in the past, the > vtkMPIController is initialized in libmesh.C, and it must be assuming > MPI_COMM_WORLD... > > _vtk_mpi_controller = vtkMPIController::New(); > _vtk_mpi_controller->Initialize(&argc, const_cast<char ***>(&argv), > /*initialized_externally=*/1); > _vtk_mpi_controller->SetGlobalController(_vtk_mpi_controller); > > In order for the parallel VTK writer to work, the Mesh probably > therefore also needs to be using MPI_COMM_WORLD. > > -- > John |
From: John P. <jwp...@gm...> - 2018-06-13 16:28:39
|
On Wed, Jun 13, 2018 at 10:14 AM, Michael Povolotskyi <mpo...@pu...> wrote: > Thank you John, > > I do have some MPI code before LibMeshInit in my real application. > > Also, If I do not call MPI_Finalize at the very end I'm getting warning > messages from MPI. > > Thank you for you explanation with .pvtu, I have changed my code to > libMesh::VTKIO out(mesh); out.write("mesh1.pvtu"); and this worked. > > Do you have any suggestions for the Problem #2 that I reported? > > It seems to me that the problem is that the communicator of the Mesh > object is smaller than the communicator of the VTK. This causes a problem > for me. > Hmm, I don't think we have considered this use case in the past, the vtkMPIController is initialized in libmesh.C, and it must be assuming MPI_COMM_WORLD... _vtk_mpi_controller = vtkMPIController::New(); _vtk_mpi_controller->Initialize(&argc, const_cast<char ***>(&argv), /*initialized_externally=*/1); _vtk_mpi_controller->SetGlobalController(_vtk_mpi_controller); In order for the parallel VTK writer to work, the Mesh probably therefore also needs to be using MPI_COMM_WORLD. -- John |
From: John P. <jwp...@gm...> - 2018-06-13 16:01:54
|
On Tue, Jun 12, 2018 at 7:48 PM, Povolotskyi, Mykhailo <mpo...@pu...> wrote: > Dear Libmesh develepers, > > I want to create different instances of mesh on several MPI processes. > Then I want to output the mesh from one MPI process. > > > I am facing problems with the following code: > > > #include "libmesh/libmesh.h" > #include "libmesh/mesh_generation.h" > #include "libmesh/mesh.h" > int main(int argc, char ** argv) > { > > MPI_Init(&argc, &argv); > MPI is already initialized in LibMeshInit, so no need to do this manually unless your real code does MPI communication before LibMeshInit... > { > libMesh::LibMeshInit init (argc, argv,MPI_COMM_WORLD); > libMesh::Mesh mesh(libMesh::Parallel::Communicator(MPI_COMM_SELF)); > libMesh::MeshTools::Generation::build_cube (mesh, > 10, 10, 5, 0.0, 2.0, 0.0, > 3.0, 0.0, 4.0,libMesh::HEX8); > > int rank; > MPI_Comm_rank(MPI_COMM_WORLD, &rank); > Again, I'd just use init.comm().rank() to find out the rank. > > if (rank == 0) > { > mesh.write("mesh1.vtu"); > } > } > MPI_Finalize(); > MPI_Finalize is called in the LibMeshInit destructor, no need to call it manually. > return 0; > } > The problems are as follows: > > Problem #1) If I run the code in serial, I'm getting warning: > > Warning: This MeshOutput subclass only supports meshes which have been > serialized! > Warning: This MeshOutput subclass only supports meshes which have been > serialized! > The .pvtu extension should be used when writing VTK files in libMesh. > > > My question: > > a) how to avoid the first waning about the mesh not being serialized? > Unless you have configured libmesh with --enable-parmesh (and therefore Mesh == DistributedMesh) this warning can be safely ignored. > > b) I tried to change the filename to mesh1.pvtu > > In this case I'm getting an error message: > > > > ERROR: Unrecognized file extension: mesh1.pvtu > I understand the following: > This is just a (possibly unnecessary) limitation of the NamebasedIO class, it should work (as in misc_ex11 and misc_ex4 if you explicitly construct a VTKIO object and then call the write() method. -- John |
From: Povolotskyi, M. <mpo...@pu...> - 2018-06-13 01:49:04
|
Dear libMesh developers,

I want to create different instances of mesh on several MPI processes. Then I want to output the mesh from one MPI process.

I am facing problems with the following code:

#include "libmesh/libmesh.h"
#include "libmesh/mesh_generation.h"
#include "libmesh/mesh.h"
int main(int argc, char ** argv)
{
  MPI_Init(&argc, &argv);
  {
    libMesh::LibMeshInit init (argc, argv, MPI_COMM_WORLD);
    libMesh::Mesh mesh(libMesh::Parallel::Communicator(MPI_COMM_SELF));
    libMesh::MeshTools::Generation::build_cube (mesh,
                                                10, 10, 5,
                                                0.0, 2.0, 0.0, 3.0, 0.0, 4.0,
                                                libMesh::HEX8);
    int rank;
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    if (rank == 0)
      {
        mesh.write("mesh1.vtu");
      }
  }
  MPI_Finalize();
  return 0;
}

The problems are as follows:

Problem #1) If I run the code in serial, I'm getting warnings:

Warning: This MeshOutput subclass only supports meshes which have been serialized!
Warning: This MeshOutput subclass only supports meshes which have been serialized!
The .pvtu extension should be used when writing VTK files in libMesh.

My questions:

a) how to avoid the first warning about the mesh not being serialized?

b) I tried to change the filename to mesh1.pvtu. In this case I'm getting an error message:

ERROR: Unrecognized file extension: mesh1.pvtu
I understand the following:
  *.dat   -- Tecplot ASCII file
  *.e     -- Sandia's ExodusII format
  *.exd   -- Sandia's ExodusII format
  *.fro   -- ACDL's surface triangulation file
  *.gmv   -- LANL's GMV (General Mesh Viewer) format
  *.mesh  -- MEdit mesh format
  *.mgf   -- MGF binary mesh format
  *.msh   -- GMSH ASCII file
  *.n     -- Sandia's Nemesis format
  *.nem   -- Sandia's Nemesis format
  *.plt   -- Tecplot binary file
  *.poly  -- TetGen ASCII file
  *.ucd   -- AVS's ASCII UCD format
  *.ugrid -- Kelly's DIVA ASCII format
  *.unv   -- I-deas Universal format
  *.vtu   -- VTK (paraview-readable) format
  *.xda   -- libMesh ASCII format
  *.xdr   -- libMesh binary format,
Exiting without writing output

Problem #2) If I run the code in parallel, namely mpiexec -n 2 a.out, then the program gets frozen. It is frozen in the output.

Thank you for your help, Michael. |
From: Vinicius C. R. <vin...@co...> - 2018-06-12 03:31:59
|
Hi John, I pasted a somewhat smaller version of my code below. By playing with the refinement steps limit and the commented out init() and reinit() lines you should be able to reproduce the two exceptions I got. I hope this clarifies. Thanks for your time! //----------------------------------------------------------------------------------------------------------------------------------------------------- #include <iostream> #include <vector> #include <cmath> #include "libmesh/elem.h" #include "libmesh/equation_systems.h" #include "libmesh/linear_implicit_system.h" #include "libmesh/mesh.h" #include "libmesh/mesh_refinement.h" #include "libmesh/point.h" #include "libmesh/system_subset_by_subdomain.h" #include "libmesh/vtk_io.h" using namespace libMesh; using namespace std; class DummyAssembler : public System::Assembly { public: void assemble() { std::cout << "assembler placeholder" << std::endl; }; }; enum subdomain_type { IN_DOMAIN = 1, SURROGATE = 2, BETWEEN = 3, OUTSIDE = 4 }; int main(int argc, char ** argv) { LibMeshInit init(argc, argv); Mesh mesh(init.comm()); mesh.read("mesh.msh"); EquationSystems equation_systems(mesh); equation_systems.add_system<LinearImplicitSystem>("MySystem"); LinearImplicitSystem *system = &(equation_systems.get_system<LinearImplicitSystem>("MySystem")); DummyAssembler dummyAssembler; system->attach_assemble_object(dummyAssembler); const set<subdomain_id_type> id_list{subdomain_type::IN_DOMAIN, subdomain_type::SURROGATE}; system->add_variable("u", FIRST, LAGRANGE, &id_list); { MeshBase::element_iterator el = mesh.active_local_elements_begin(); const MeshBase::element_iterator elEnd = mesh.active_local_elements_end(); for (; el != elEnd; ++el) { (*el)->subdomain_id() = subdomain_type::BETWEEN; } } // equation_systems.init(); Point center(0., 0., 0.); Real radius = 1.; size_t refinementSteps = 1; while (refinementSteps < 2) { vector<bool> is_included(mesh.max_node_id(), true); // Checks if node lies inside or outside domain { MeshBase::node_iterator node = mesh.active_nodes_begin(); const MeshBase::node_iterator end_node = mesh.active_nodes_end(); bool inside_circle; size_t i = 0; for (; node != end_node; ++node) { inside_circle = ((Point) **node - center).norm() < radius; is_included[i] = is_included[i] && inside_circle; ++i; } } // Classifies subdomains ('between' is the one crossed by the boundary) { MeshBase::element_iterator el = mesh.active_local_subdomain_elements_begin(subdomain_type::BETWEEN); const MeshBase::element_iterator elEnd = mesh.active_local_subdomain_elements_end(subdomain_type::BETWEEN); unsigned count; for (; el != elEnd; ++el) { count = 0; for (size_t i = 0; i < (*el)->n_nodes(); ++i) { count = count + is_included[(*el)->get_node(i)->id()]; } if (count == (*el)->n_nodes()) { (*el)->subdomain_id() = subdomain_type::IN_DOMAIN; } else if (count == 0) { (*el)->subdomain_id() = subdomain_type::OUTSIDE; } else { (*el)->subdomain_id() = subdomain_type::BETWEEN; } } } // Refine boundary intersection { MeshRefinement meshRefinement(mesh); MeshBase::element_iterator el = mesh.active_local_subdomain_elements_begin(subdomain_type::BETWEEN); const MeshBase::element_iterator elEnd = mesh.active_local_subdomain_elements_end(subdomain_type::BETWEEN); for (; el != elEnd; ++el) { (*el)->set_refinement_flag(Elem::REFINE); } meshRefinement.refine_elements(); } // equation_systems.reinit(); ++refinementSteps; } // final subdomain evaluation { MeshBase::element_iterator el = mesh.active_local_subdomain_elements_begin(subdomain_type::IN_DOMAIN); const 
MeshBase::element_iterator elEnd = mesh.active_local_subdomain_elements_end(subdomain_type::IN_DOMAIN); bool surrogate; for (; el != elEnd; ++el) { surrogate = false; for (unsigned i = 0; i < (*el)->n_sides(); ++i) { surrogate = surrogate || (*el)->neighbor_ptr(i)->subdomain_id() == subdomain_type::BETWEEN; } if (surrogate) { (*el)->subdomain_id() = subdomain_type::SURROGATE; } } } equation_systems.init(); SystemSubsetBySubdomain::SubdomainSelectionByList selection(id_list); SystemSubsetBySubdomain subset(*system, selection); system->restrict_solve_to(&subset, SUBSET_ZERO); VTKIO vtk_io(mesh); vtk_io.write_equation_systems("out.pvtu", equation_systems); return 0; } //----------------------------------------------------------------------------------------------------------------------------------------------------- 2018-06-11 12:33 GMT-03:00 John Peterson <jwp...@gm...>: > > > On Sun, Jun 10, 2018 at 10:02 PM, Vinicius C. Reis < > vin...@co...> wrote: > >> Hi again, it sounds possible that the issue is the hanging nodes. if I >> only >> do one refinement step no error occurs. > > > Yes, but presumably there are still hanging nodes even after one level of > refinement is performed, unless you are referring to one *uniform* > refinement step... > > >> Adding another step, same >> exception. Since I am hard flagging the elements to be refined, quite a >> few >> hanging nodes appear already in the first refinement step. I tried calling >> init() before start refining and reinit() at the end of each refinement >> step, but another error occurred (this one I couldn't check the stack, the >> debugger didn't manage to stop before finishing the execution). >> > > OK, this is what I was going to suggest (calling reinit() between each > round of refinement), and it should work. It's possible to you are still > doing something in your hard flagging step that we are not set up to > handle, a stack track would be helpful, as would sharing the actual code > you've written. > > > >> I am not worried if the refinement is propagated through some of the non >> selected elements, I was actually expecting this to happen. Is there a way >> to detect hanging nodes so I can set the neighbor elements to be refined >> as >> well? > > > Refining neighbors doesn't get rid of hanging nodes, it just makes a > hanging node with some other neighbor. You can only eliminate hanging nodes > by doing uniform refinement... > > -- > John > |
From: Griffith, B. E. <bo...@em...> - 2018-06-12 02:01:26
|
On Jun 11, 2018, at 9:23 PM, John Peterson <jwp...@gm...<mailto:jwp...@gm...>> wrote: On Mon, Jun 11, 2018 at 10:16 AM, Griffith, Boyce Eugene <bo...@em...<mailto:bo...@em...>> wrote: Folks -- I want to play around with an application-specific mesh partitioning that will change during the course of a time-dependent simulation. (Basically, I want to experiment with using the partitioning of a different parallel data structure to drive the mesh partitioning.) I think that the main thing that I need to do is to implement _do_partition(). (I will also implement _do_repartition() once the basic _do_partition() is working.) I was wondering what else needs to be done so that distributed System data are redistributed. Does this happen automatically when calling Partitioner::repartition(), or is there more to it than that? I think the System data should be handled by System::reinit() which is be called by EquationSystems::reinit(). OK, so something like: partiontiner->repartition(); equation_systems->reinit(); and then continue? -- John |
From: John P. <jwp...@gm...> - 2018-06-12 01:23:57
|
On Mon, Jun 11, 2018 at 10:16 AM, Griffith, Boyce Eugene < bo...@em...> wrote: > Folks -- > > I want to play around with an application-specific mesh partitioning that > will change during the course of a time-dependent simulation. (Basically, > I want to experiment with using the partitioning of a different parallel > data structure to drive the mesh partitioning.) > > I think that the main thing that I need to do is to implement > _do_partition(). (I will also implement _do_repartition() once the basic > _do_partition() is working.) I was wondering what else needs to be done so > that distributed System data are redistributed. Does this happen > automatically when calling Partitioner::repartition(), or is there more to > it than that? > I think the System data should be handled by System::reinit() which is be called by EquationSystems::reinit(). -- John |
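A sketch of the pattern being discussed: a custom Partitioner overriding _do_partition(), followed by repartition() and EquationSystems::reinit(). The class name and the round-robin assignment are placeholders, and (as Roy notes elsewhere in the thread) this sequence alone may not be sufficient with DistributedMesh:

#include <memory>
#include "libmesh/partitioner.h"
#include "libmesh/mesh_base.h"
#include "libmesh/elem.h"

class MyPartitioner : public libMesh::Partitioner
{
public:
  // Newer libMesh versions require clone(); defined here without
  // "override" so the sketch also builds against older trees.
  virtual std::unique_ptr<libMesh::Partitioner> clone () const
  { return std::make_unique<MyPartitioner>(*this); }

protected:
  virtual void _do_partition (libMesh::MeshBase & mesh,
                              const unsigned int n)
  {
    // Assign each active element an owner using whatever application
    // data should drive the partitioning; round-robin is a placeholder.
    for (auto & elem : mesh.active_element_ptr_range())
      elem->processor_id() =
        static_cast<libMesh::processor_id_type>(elem->id() % n);
  }
};

// Later, when the application decides ownership should change:
//
//   MyPartitioner partitioner;
//   partitioner.repartition(mesh);   // ends up in _do_partition()
//   equation_systems.reinit();       // System::reinit() redistributes data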
From: Griffith, B. E. <bo...@em...> - 2018-06-11 19:48:30
|
Folks -- I want to play around with an application-specific mesh partitioning that will change during the course of a time-dependent simulation. (Basically, I want to experiment with using the partitioning of a different parallel data structure to drive the mesh partitioning.) I think that the main thing that I need to do is to implement _do_partition(). (I will also implement _do_repartition() once the basic _do_partition() is working.) I was wondering what else needs to be done so that distributed System data are redistributed. Does this happen automatically when calling Partitioner::repartition(), or is there more to it than that? Thanks! -- Boyce |
From: John P. <jwp...@gm...> - 2018-06-11 15:34:23
|
On Sun, Jun 10, 2018 at 10:02 PM, Vinicius C. Reis <vin...@co... > wrote: > Hi again, it sounds possible that the issue is the hanging nodes. if I only > do one refinement step no error occurs. Yes, but presumably there are still hanging nodes even after one level of refinement is performed, unless you are referring to one *uniform* refinement step... > Adding another step, same > exception. Since I am hard flagging the elements to be refined, quite a few > hanging nodes appear already in the first refinement step. I tried calling > init() before start refining and reinit() at the end of each refinement > step, but another error occurred (this one I couldn't check the stack, the > debugger didn't manage to stop before finishing the execution). > OK, this is what I was going to suggest (calling reinit() between each round of refinement), and it should work. It's possible to you are still doing something in your hard flagging step that we are not set up to handle, a stack track would be helpful, as would sharing the actual code you've written. > I am not worried if the refinement is propagated through some of the non > selected elements, I was actually expecting this to happen. Is there a way > to detect hanging nodes so I can set the neighbor elements to be refined as > well? Refining neighbors doesn't get rid of hanging nodes, it just makes a hanging node with some other neighbor. You can only eliminate hanging nodes by doing uniform refinement... -- John |
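A sketch of the flag / refine / reinit loop John recommends, with reinit() after every round; the flagging criterion and names are placeholders, not Vinicius's actual code:

#include "libmesh/elem.h"
#include "libmesh/equation_systems.h"
#include "libmesh/mesh.h"
#include "libmesh/mesh_refinement.h"

void refine_in_rounds (libMesh::Mesh & mesh,
                       libMesh::EquationSystems & es,
                       unsigned int n_rounds)
{
  es.init();   // initialize once, before any refinement

  for (unsigned int r = 0; r < n_rounds; ++r)
    {
      libMesh::MeshRefinement refinement (mesh);

      // Hard-flag whichever elements the application wants refined;
      // the even-id test is only a stand-in for a real criterion.
      for (auto & elem : mesh.active_local_element_ptr_range())
        if (elem->id() % 2 == 0)
          elem->set_refinement_flag(libMesh::Elem::REFINE);

      refinement.refine_elements();

      // Rebuild dof maps, hanging-node constraints, and vectors
      // before the next round of flagging.
      es.reinit();
    }
}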
From: Vinicius C. R. <vin...@co...> - 2018-06-11 04:02:35
|
Hi again, it sounds possible that the issue is the hanging nodes. if I only do one refinement step no error occurs. Adding another step, same exception. Since I am hard flagging the elements to be refined, quite a few hanging nodes appear already in the first refinement step. I tried calling init() before start refining and reinit() at the end of each refinement step, but another error occurred (this one I couldn't check the stack, the debugger didn't manage to stop before finishing the execution). I am not worried if the refinement is propagated through some of the non selected elements, I was actually expecting this to happen. Is there a way to detect hanging nodes so I can set the neighbor elements to be refined as well? Another option I can think is implementing an ElementFlagging object, but if I use the refine element flag in It I imagine I would fall into the same problem I was trying to avoid in the first place. 2018-06-05 16:29 GMT-03:00 John Peterson <jwp...@gm...>: > > > On Tue, Jun 5, 2018 at 1:04 PM, Vinicius C. Reis <vin...@co... > > wrote: > >> Sorry about that, I should have imagined... Well, exception and stack >> trace >> from a fresh run in the text dump below. I managed to reproduce it in a >> quite smaller code, what would be the best way to share, paste it into an >> email or through git repository? >> > > libMesh::System::reinit_constraints (this=0xc26700) >> libMesh::System::init_data (this=0xc26700) >> libMesh::ExplicitSystem::init_data (this=0xc26700) >> libMesh::ImplicitSystem::init_data (this=0xc26700) >> libMesh::LinearImplicitSystem::init_data (this=0xc26700) >> libMesh::System::init (this=0xc26700) >> libMesh::EquationSystems::init (this=0x7fffffffd800) >> main (argc=1, argv=0x7fffffffdfe8) > > > Hmm... it's likely the problem is related to whatever you are doing to the > Mesh before you call init(). I don't think you can have refinement/hanging > nodes in the Mesh before calling init(), for example, but I could be wrong > about this. > > -- > John > |
From: John P. <jwp...@gm...> - 2018-06-08 19:58:18
|
On Fri, Jun 8, 2018 at 1:40 PM, Amneet Bhalla <mai...@gm...> wrote: > Cool, thanks! It worked. > > Just to be sure the element type would be TRI3 for MatlabIO? Is there any > libMesh internal magic that can make it TRI6? > Sure, after reading it in you can call mesh.all_second_order(); to convert to TRI6. -- John |
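A sketch of the read-then-promote sequence (file name illustrative): MatlabIO produces TRI3 elements, and all_second_order() converts them to TRI6 afterwards.

#include "libmesh/libmesh.h"
#include "libmesh/matlab_io.h"
#include "libmesh/mesh.h"

int main (int argc, char ** argv)
{
  libMesh::LibMeshInit init (argc, argv);

  libMesh::Mesh mesh (init.comm());
  libMesh::MatlabIO (mesh).read("mesh.dat");   // TRI3 as read
  mesh.prepare_for_use();

  mesh.all_second_order();                     // TRI3 -> TRI6
  return 0;
}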
From: Amneet B. <mai...@gm...> - 2018-06-08 19:41:27
|
Cool, thanks! It worked. Just to be sure the element type would be TRI3 for MatlabIO? Is there any libMesh internal magic that can make it TRI6? On Fri, Jun 8, 2018 at 12:08 PM John Peterson <jwp...@gm...> wrote: > > > On Fri, Jun 8, 2018 at 12:53 PM, Amneet Bhalla <mai...@gm...> > wrote: > >> Nevermind. I read the actual comment in ./include/mesh/matlab_io.h file. >> There are indeed new lines. >> >> (Sorry for the noise) >> > > Ah, yeah it looks like these lines need to be in a \verbatim section so > that Doxygen doesn't replace \n characters with actual newlines. > > -- > John > -- --Amneet |