From: Zack V. <jan...@gm...> - 2017-11-28 22:02:33
|
Hello again, I have a few related questions, and help is, as always, much appreciated! I am using both pygmsh and meshpy to generate meshes and read them in, but I'll focus on the former, pygmsh.

*Question 1* For pygmsh, I can generate and read (into libmesh) order-1 triangular (and, I think, tetrahedral) meshes just fine, but when I generate order-2 elements, I receive an error that refers to an unrecognized type 'EDGE'. I've attached a small example of the output (points, cells and cell_data) in pygmsh for a disk, which I save in exodus ('.e') format (using meshpy's write function). In summary:

order-1 triangle elements: 'cells' is a dictionary with only one key, 'triangle'
order-2 triangle elements: 'cells' is a dictionary with the keys 'line3', 'triangle6' and 'vertex'

Do I just need to change the key names to, say, 'Edge3'?

*Question 2* For the cases above, I noticed that I am no longer running libmesh in parallel when I read in the mesh. I found, and tried to use,

libMesh::Partitioner part();
// Partitioner()::partition_unpartitioned_elements(mesh);
// Partitioner.partition_unpartitioned_elements(mesh);

but I am not sure how exactly this works. Also, I am not sure whether this is the right approach.

*Question 3* This is less important, as I may have sidestepped it for now, but I'm sure it will return with a vengeance. I looked through the forums to find out how VTKIO::write_nodal_data works, and from there looked into the source for VTKIO::write_equation_system to see if I could figure out how the former is used in the latter, but to no avail. I'm writing using exo_io.write_timestep for now, but my screen (rightly?) reprimands me for doing so. I have a sense that this will involve a host of other functions (allgather?) so I may send this as a separate email question. |
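Regarding Question 2, here is a minimal sketch of how reading and partitioning usually fit together, assuming libMesh was configured with Exodus and Metis support; the helper name `read_and_partition` and the overall workflow are illustrative assumptions, not the list's recommended recipe:

```cpp
#include "libmesh/mesh.h"
#include "libmesh/exodusII_io.h"
#include "libmesh/metis_partitioner.h"

#include <string>

using namespace libMesh;

// Hypothetical helper: read an Exodus mesh and make sure it ends up
// partitioned across all processors.
void read_and_partition (Mesh & mesh, const std::string & filename)
{
  ExodusII_IO exo(mesh);
  exo.read(filename);

  // prepare_for_use() runs the mesh's default partitioner, which is
  // usually all that is needed after a read:
  mesh.prepare_for_use();

  // A concrete Partitioner can also be invoked explicitly, e.g.:
  MetisPartitioner metis;
  metis.partition(mesh, mesh.n_processors());
}
```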
From: Roy S. <roy...@ic...> - 2017-11-27 21:23:00
|
So sorry nobody got back to you on this already!

On Tue, 14 Nov 2017, Michael Povolotskyi wrote:

> I would like to output nodal data stored in a NumericVector, using
>
> void libMesh::MeshOutput< MT >::write_nodal_data ( const std::string & fname,
>                                                    const NumericVector< Number > & parallel_soln,
>                                                    const std::vector< std::string > & names )
>
> I need to fill the NumericVector with values.

Well, it needs to be filled with values somehow. Typically those values come from an interpolation/projection of some function or from a solution of some system of equations, rather than being set by hand.

> As far as I understand, I need to provide data both for active and non-active
> nodes, because the NumericVector contains entries for all local nodes.

Oh, it's much more complicated than that:

In multiphysics problems, you must have set values for each degree of freedom (DoF) of each variable, and there will therefore in general be multiple variable DoFs per node.

In parallel, each processor must have set values for its own local DoFs.

On non-Lagrange elements (e.g. hierarchic or discontinuous) or on Lagrange non-isoparametric elements (e.g. a bilinear function on a QUAD9) the common belief that one node == one DoF is false. You may have nodes where a variable has no DoFs (e.g. the mid-edge nodes in that bilinear-on-QUAD9 case), you may have nodes where a variable has multiple DoFs (e.g. the mid-edge nodes in the hierarchic case for cubic and higher polynomial degree), and you may have DoFs stored on elements where there is no node (e.g. "bubble function" coefficients for cubic and higher polynomial degree).

If you have a function you're trying to discretize, I suggest using System::project_vector() to do the discretization for you.

If you insist on doing the interpolation yourself (which isn't completely insane, if you know you're doing Lagrange FE only and you want to be as fast as possible), then the method you want to use to look up DoF indices is DofObject::dof_number(). Give it a system number, variable number, and component number (the latter is always 0 for Lagrange FE on nodes where those FE have a DoF) and it will give you the corresponding DoF index.

> 1. Will I get the correct order if I iterate from mesh.local_nodes_begin() to
> mesh.local_nodes_end() ?

Maybe, if the phase of the moon is correct? Even if the answer happens to be yes, definitely don't count on it; it's not a behavior we guarantee. We do guarantee that this iteration will give you the correct *set* of nodes, though, in parallel - local DoFs are stored on and only on local nodes.

> 2. Will the values I set for non-active nodes matter for the output?

Apparently we used to have such a thing as non-active nodes? Today I Learned. These days we strip nodes from the mesh as soon as they're no longer used by any element (if you actually *want* a disconnected node then you create a "NodeElem" to connect it to), so you should never encounter one.
---
Roy |
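A minimal sketch of the hand-interpolation route described above, assuming a single LAGRANGE variable named "u"; the helper name `fill_nodal_vector` and the placeholder nodal value (the node's x-coordinate) are assumptions, and System::project_vector() remains the safer general-purpose path:

```cpp
#include "libmesh/mesh.h"
#include "libmesh/system.h"
#include "libmesh/numeric_vector.h"
#include "libmesh/node.h"

using namespace libMesh;

// Hypothetical helper: set one value per local DoF of the variable "u",
// using DofObject::dof_number() for the indexing.
void fill_nodal_vector (const Mesh & mesh,
                        const System & system,
                        NumericVector<Number> & parallel_soln)
{
  const unsigned int sys_num = system.number();
  const unsigned int var_num = system.variable_number("u");

  for (MeshBase::const_node_iterator it = mesh.local_nodes_begin();
       it != mesh.local_nodes_end(); ++it)
    {
      const Node * node = *it;

      // Skip nodes where this variable stores no DoF (possible for
      // non-Lagrange or non-isoparametric discretizations).
      if (node->n_comp(sys_num, var_num) == 0)
        continue;

      const dof_id_type dof = node->dof_number(sys_num, var_num, 0);

      // Placeholder value: the node's x-coordinate stands in for
      // whatever function is actually being interpolated.
      parallel_soln.set(dof, (*node)(0));
    }

  parallel_soln.close();
}
```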
From: John P. <jwp...@gm...> - 2017-11-27 15:33:42
|
On Tue, Nov 21, 2017 at 4:47 PM, Alexander Lindsay <ale...@gm...> wrote:

> All,
>
> I get the following errors when running valgrind in parallel on a MOOSE
> test case. I could probably figure this out with time, but if someone
> smarter than I could quickly deduce whether these are false positives or
> something real then I would definitely not complain. (I've trimmed the
> output to 1 process; the errors for every other process were the same).

I think I would probably blame this on something in the MPI implementation? It may be unavoidable, but it looks like the total leak is not very big:

==25211== definitely lost: 2,928 bytes in 13 blocks

I wouldn't lose too much sleep over it...
--
John |
From: Roy S. <roy...@ic...> - 2017-11-26 21:45:33
|
On Sun, 26 Nov 2017, Zack Vitoh wrote: > Sorry about that, I figured it out, I'll post the solution in a bit. I'm waiting anxiously! I added the configuration tests myself to auto-detect Debian/Ubuntu/etc PETSC installation, years ago, but I stopped using it regularly shortly afterward, and I only recently tried it out with a new laptop and discovered it wasn't working for me... --- Roy > On Sat, Nov 25, 2017 at 12:44 PM, Zack Vitoh <jan...@gm...> wrote: > >> Hello again, >> >> Sorry to be somewhat needy, but I am using Ubuntu, and configuring with >> Petsc. I specified PETSC_DIR and PETSC_ARCH, but after configuring, petsc, >> triangle, tetgen are not found? >> >> For completeness, here are the contents of my do_configure file. I (chmod >> +x)'ed this and run, from ~/src: >> sudo git clone (libmesh git location) >> cd libmesh >> sudo ./do_configure >> >> And the resulting summary is: >> >> >> ----------------------------------- SUMMARY ------------------------------ >> ----- >> >> Package version.................... : libmesh-1.3.0-pre >> >> C++ compiler type.................. : gcc5 >> C++ compiler....................... : mpicxx >> C compiler......................... : mpicc >> Fortran compiler................... : mpif90 >> Build Methods...................... : dbg devel opt >> >> CPPFLAGS...(dbg)................... : -DDEBUG -D_GLIBCXX_DEBUG >> -D_GLIBCXX_DEBUG_PEDANTIC >> CXXFLAGS...(dbg)................... : -std=gnu++11 -O0 >> -felide-constructors -g -pedantic -W -Wall -Wextra -Wno-long-long -Wunused >> -Wpointer-arith -Wformat -Wparentheses -Woverloaded-virtual >> -Wno-variadic-macros -fopenmp -std=gnu++11 >> CFLAGS.....(dbg)................... : -g -Wimplicit -fopenmp >> >> CPPFLAGS...(devel)................. : >> CXXFLAGS...(devel)................. : -std=gnu++11 -O2 >> -felide-constructors -g -pedantic -W -Wall -Wextra -Wno-long-long -Wunused >> -Wpointer-arith -Wformat -Wparentheses -Wuninitialized -funroll-loops >> -fstrict-aliasing -Woverloaded-virtual -Wdisabled-optimization >> -Wno-variadic-macros -fopenmp -std=gnu++11 >> CFLAGS.....(devel)................. : -O2 -g -Wimplicit -funroll-loops >> -fstrict-aliasing -fopenmp >> >> CPPFLAGS...(opt)................... : -DNDEBUG >> CXXFLAGS...(opt)................... : -std=gnu++11 -O2 >> -felide-constructors -funroll-loops -fstrict-aliasing >> -Wdisabled-optimization -Wno-variadic-macros -fopenmp -std=gnu++11 >> CFLAGS.....(opt)................... : -O2 -funroll-loops >> -fstrict-aliasing -fopenmp >> >> Install dir........................ : /opt/libmesh-master >> Build user......................... : root >> Build host......................... : Harmony >> Build architecture................. : x86_64-unknown-linux-gnu >> Git revision....................... : cf5616c3908f9f221b25b469b30aad >> 9d279511a7 >> >> Library Features: >> library warnings................. : yes >> library deprecated code support.. : yes >> adaptive mesh refinement......... : yes >> blocked matrix/vector storage.... : no >> complex variables................ : yes >> example suite.................... : yes >> ghosted vectors.................. : yes >> high-order shape functions....... : yes >> unique-id support................ : yes >> id size (boundaries)............. : 2 bytes >> id size (dofs)................... : 4 bytes >> id size (unique)................. : 8 bytes >> id size (processors)............. : 2 bytes >> id size (subdomains)............. : 2 bytes >> infinite elements................ : yes >> Dirichlet constraints............ 
: yes >> node constraints................. : yes >> parallel mesh.................... : yes >> performance logging.............. : yes >> periodic boundary conditions..... : yes >> reference counting............... : yes >> shape function 2nd derivatives... : yes >> stack trace files................ : yes >> track node valence............... : yes >> variational smoother............. : yes >> xdr binary I/O................... : yes >> >> Optional Packages: >> boost............................ : yes >> capnproto........................ : no >> cppunit.......................... : no >> curl............................. : no >> eigen............................ : yes >> exodus........................... : yes >> version....................... : v5.22 >> fparser.......................... : yes >> build from version............ : release >> glpk............................. : no >> gmv.............................. : yes >> gzstream......................... : yes >> hdf5............................. : no >> laspack.......................... : no >> libhilbert....................... : yes >> metis............................ : yes >> mpi.............................. : yes >> nanoflann........................ : yes >> nemesis.......................... : yes >> version....................... : v5.22 >> netcdf........................... : yes >> version....................... : 4 >> nlopt............................ : no >> parmetis......................... : yes >> petsc............................ : no >> qhull............................ : yes >> sfcurves......................... : no >> slepc............................ : no >> thread model..................... : pthread >> c++ rtti ........................ : yes >> tecio............................ : yes >> tecplot...(vendor binaries)...... : no >> tetgen........................... : no >> triangle......................... : no >> trilinos......................... : yes >> AztecOO....................... : >> NOX........................... : >> ML............................ : >> Tpetra........................ : >> DTK........................... : >> Ifpack........................ : >> Epetra........................ : >> EpetraExt..................... : >> vtk.............................. : yes >> version....................... : 6.2.0 >> >> libmesh_optional_INCLUDES........ : -I/usr/include/eigen3 >> -I/usr/include/vtk-6.2 -I/usr/include/mpi -I/usr/include >> >> libmesh_optional_LIBS............ : -lvtkIOCore-6.2 -lvtkCommonCore-6.2 >> -lvtkCommonDataModel-6.2 -lvtkFiltersCore-6.2 -lvtkIOXML-6.2 >> -lvtkImagingCore-6.2 -lvtkIOImage-6.2 -lvtkImagingMath-6.2 >> -lvtkIOParallelXML-6.2 -lvtkParallelMPI-6.2 -lvtkParallelCore-6.2 -lz -lmpi >> -L/usr/lib/vtk-6.2 -L/usr/lib >> >> ------------------------------------------------------------ >> ------------------- >> Configure complete, now type 'make' and then 'make install'. >> >> I actually don't recall installing trilinos, but installed triangle and >> tetgen with synaptic package manager. I specified the tetgen include and >> header, so expected these to be found, but they were not? >> >> I spoke with others who specified PETSC_DIR and PETSC_ARCH, and so used >> their process, but mine is not found? >> >> This is somewhat of a basic question, so if anyone could help I'd really >> appreciate it! >> > ------------------------------------------------------------------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! 
http://sdm.link/slashdot > _______________________________________________ > Libmesh-users mailing list > Lib...@li... > https://lists.sourceforge.net/lists/listinfo/libmesh-users > > |
From: Zack V. <jan...@gm...> - 2017-11-26 08:35:08
|
Sorry about that, I figured it out, I'll post the solution in a bit. On Sat, Nov 25, 2017 at 12:44 PM, Zack Vitoh <jan...@gm...> wrote: > Hello again, > > Sorry to be somewhat needy, but I am using Ubuntu, and configuring with > Petsc. I specified PETSC_DIR and PETSC_ARCH, but after configuring, petsc, > triangle, tetgen are not found? > > For completeness, here are the contents of my do_configure file. I (chmod > +x)'ed this and run, from ~/src: > sudo git clone (libmesh git location) > cd libmesh > sudo ./do_configure > > And the resulting summary is: > > > ----------------------------------- SUMMARY ------------------------------ > ----- > > Package version.................... : libmesh-1.3.0-pre > > C++ compiler type.................. : gcc5 > C++ compiler....................... : mpicxx > C compiler......................... : mpicc > Fortran compiler................... : mpif90 > Build Methods...................... : dbg devel opt > > CPPFLAGS...(dbg)................... : -DDEBUG -D_GLIBCXX_DEBUG > -D_GLIBCXX_DEBUG_PEDANTIC > CXXFLAGS...(dbg)................... : -std=gnu++11 -O0 > -felide-constructors -g -pedantic -W -Wall -Wextra -Wno-long-long -Wunused > -Wpointer-arith -Wformat -Wparentheses -Woverloaded-virtual > -Wno-variadic-macros -fopenmp -std=gnu++11 > CFLAGS.....(dbg)................... : -g -Wimplicit -fopenmp > > CPPFLAGS...(devel)................. : > CXXFLAGS...(devel)................. : -std=gnu++11 -O2 > -felide-constructors -g -pedantic -W -Wall -Wextra -Wno-long-long -Wunused > -Wpointer-arith -Wformat -Wparentheses -Wuninitialized -funroll-loops > -fstrict-aliasing -Woverloaded-virtual -Wdisabled-optimization > -Wno-variadic-macros -fopenmp -std=gnu++11 > CFLAGS.....(devel)................. : -O2 -g -Wimplicit -funroll-loops > -fstrict-aliasing -fopenmp > > CPPFLAGS...(opt)................... : -DNDEBUG > CXXFLAGS...(opt)................... : -std=gnu++11 -O2 > -felide-constructors -funroll-loops -fstrict-aliasing > -Wdisabled-optimization -Wno-variadic-macros -fopenmp -std=gnu++11 > CFLAGS.....(opt)................... : -O2 -funroll-loops > -fstrict-aliasing -fopenmp > > Install dir........................ : /opt/libmesh-master > Build user......................... : root > Build host......................... : Harmony > Build architecture................. : x86_64-unknown-linux-gnu > Git revision....................... : cf5616c3908f9f221b25b469b30aad > 9d279511a7 > > Library Features: > library warnings................. : yes > library deprecated code support.. : yes > adaptive mesh refinement......... : yes > blocked matrix/vector storage.... : no > complex variables................ : yes > example suite.................... : yes > ghosted vectors.................. : yes > high-order shape functions....... : yes > unique-id support................ : yes > id size (boundaries)............. : 2 bytes > id size (dofs)................... : 4 bytes > id size (unique)................. : 8 bytes > id size (processors)............. : 2 bytes > id size (subdomains)............. : 2 bytes > infinite elements................ : yes > Dirichlet constraints............ : yes > node constraints................. : yes > parallel mesh.................... : yes > performance logging.............. : yes > periodic boundary conditions..... : yes > reference counting............... : yes > shape function 2nd derivatives... : yes > stack trace files................ : yes > track node valence............... : yes > variational smoother............. 
: yes > xdr binary I/O................... : yes > > Optional Packages: > boost............................ : yes > capnproto........................ : no > cppunit.......................... : no > curl............................. : no > eigen............................ : yes > exodus........................... : yes > version....................... : v5.22 > fparser.......................... : yes > build from version............ : release > glpk............................. : no > gmv.............................. : yes > gzstream......................... : yes > hdf5............................. : no > laspack.......................... : no > libhilbert....................... : yes > metis............................ : yes > mpi.............................. : yes > nanoflann........................ : yes > nemesis.......................... : yes > version....................... : v5.22 > netcdf........................... : yes > version....................... : 4 > nlopt............................ : no > parmetis......................... : yes > petsc............................ : no > qhull............................ : yes > sfcurves......................... : no > slepc............................ : no > thread model..................... : pthread > c++ rtti ........................ : yes > tecio............................ : yes > tecplot...(vendor binaries)...... : no > tetgen........................... : no > triangle......................... : no > trilinos......................... : yes > AztecOO....................... : > NOX........................... : > ML............................ : > Tpetra........................ : > DTK........................... : > Ifpack........................ : > Epetra........................ : > EpetraExt..................... : > vtk.............................. : yes > version....................... : 6.2.0 > > libmesh_optional_INCLUDES........ : -I/usr/include/eigen3 > -I/usr/include/vtk-6.2 -I/usr/include/mpi -I/usr/include > > libmesh_optional_LIBS............ : -lvtkIOCore-6.2 -lvtkCommonCore-6.2 > -lvtkCommonDataModel-6.2 -lvtkFiltersCore-6.2 -lvtkIOXML-6.2 > -lvtkImagingCore-6.2 -lvtkIOImage-6.2 -lvtkImagingMath-6.2 > -lvtkIOParallelXML-6.2 -lvtkParallelMPI-6.2 -lvtkParallelCore-6.2 -lz -lmpi > -L/usr/lib/vtk-6.2 -L/usr/lib > > ------------------------------------------------------------ > ------------------- > Configure complete, now type 'make' and then 'make install'. > > I actually don't recall installing trilinos, but installed triangle and > tetgen with synaptic package manager. I specified the tetgen include and > header, so expected these to be found, but they were not? > > I spoke with others who specified PETSC_DIR and PETSC_ARCH, and so used > their process, but mine is not found? > > This is somewhat of a basic question, so if anyone could help I'd really > appreciate it! > |
From: Zack V. <jan...@gm...> - 2017-11-25 17:44:56
|
Hello again, Sorry to be somewhat needy, but I am using Ubuntu, and configuring with Petsc. I specified PETSC_DIR and PETSC_ARCH, but after configuring, petsc, triangle, tetgen are not found? For completeness, here are the contents of my do_configure file. I (chmod +x)'ed this and run, from ~/src: sudo git clone (libmesh git location) cd libmesh sudo ./do_configure And the resulting summary is: ----------------------------------- SUMMARY ----------------------------------- Package version.................... : libmesh-1.3.0-pre C++ compiler type.................. : gcc5 C++ compiler....................... : mpicxx C compiler......................... : mpicc Fortran compiler................... : mpif90 Build Methods...................... : dbg devel opt CPPFLAGS...(dbg)................... : -DDEBUG -D_GLIBCXX_DEBUG -D_GLIBCXX_DEBUG_PEDANTIC CXXFLAGS...(dbg)................... : -std=gnu++11 -O0 -felide-constructors -g -pedantic -W -Wall -Wextra -Wno-long-long -Wunused -Wpointer-arith -Wformat -Wparentheses -Woverloaded-virtual -Wno-variadic-macros -fopenmp -std=gnu++11 CFLAGS.....(dbg)................... : -g -Wimplicit -fopenmp CPPFLAGS...(devel)................. : CXXFLAGS...(devel)................. : -std=gnu++11 -O2 -felide-constructors -g -pedantic -W -Wall -Wextra -Wno-long-long -Wunused -Wpointer-arith -Wformat -Wparentheses -Wuninitialized -funroll-loops -fstrict-aliasing -Woverloaded-virtual -Wdisabled-optimization -Wno-variadic-macros -fopenmp -std=gnu++11 CFLAGS.....(devel)................. : -O2 -g -Wimplicit -funroll-loops -fstrict-aliasing -fopenmp CPPFLAGS...(opt)................... : -DNDEBUG CXXFLAGS...(opt)................... : -std=gnu++11 -O2 -felide-constructors -funroll-loops -fstrict-aliasing -Wdisabled-optimization -Wno-variadic-macros -fopenmp -std=gnu++11 CFLAGS.....(opt)................... : -O2 -funroll-loops -fstrict-aliasing -fopenmp Install dir........................ : /opt/libmesh-master Build user......................... : root Build host......................... : Harmony Build architecture................. : x86_64-unknown-linux-gnu Git revision....................... : cf5616c3908f9f221b25b469b30aad9d279511a7 Library Features: library warnings................. : yes library deprecated code support.. : yes adaptive mesh refinement......... : yes blocked matrix/vector storage.... : no complex variables................ : yes example suite.................... : yes ghosted vectors.................. : yes high-order shape functions....... : yes unique-id support................ : yes id size (boundaries)............. : 2 bytes id size (dofs)................... : 4 bytes id size (unique)................. : 8 bytes id size (processors)............. : 2 bytes id size (subdomains)............. : 2 bytes infinite elements................ : yes Dirichlet constraints............ : yes node constraints................. : yes parallel mesh.................... : yes performance logging.............. : yes periodic boundary conditions..... : yes reference counting............... : yes shape function 2nd derivatives... : yes stack trace files................ : yes track node valence............... : yes variational smoother............. : yes xdr binary I/O................... : yes Optional Packages: boost............................ : yes capnproto........................ : no cppunit.......................... : no curl............................. : no eigen............................ : yes exodus........................... : yes version....................... 
: v5.22 fparser.......................... : yes build from version............ : release glpk............................. : no gmv.............................. : yes gzstream......................... : yes hdf5............................. : no laspack.......................... : no libhilbert....................... : yes metis............................ : yes mpi.............................. : yes nanoflann........................ : yes nemesis.......................... : yes version....................... : v5.22 netcdf........................... : yes version....................... : 4 nlopt............................ : no parmetis......................... : yes petsc............................ : no qhull............................ : yes sfcurves......................... : no slepc............................ : no thread model..................... : pthread c++ rtti ........................ : yes tecio............................ : yes tecplot...(vendor binaries)...... : no tetgen........................... : no triangle......................... : no trilinos......................... : yes AztecOO....................... : NOX........................... : ML............................ : Tpetra........................ : DTK........................... : Ifpack........................ : Epetra........................ : EpetraExt..................... : vtk.............................. : yes version....................... : 6.2.0 libmesh_optional_INCLUDES........ : -I/usr/include/eigen3 -I/usr/include/vtk-6.2 -I/usr/include/mpi -I/usr/include libmesh_optional_LIBS............ : -lvtkIOCore-6.2 -lvtkCommonCore-6.2 -lvtkCommonDataModel-6.2 -lvtkFiltersCore-6.2 -lvtkIOXML-6.2 -lvtkImagingCore-6.2 -lvtkIOImage-6.2 -lvtkImagingMath-6.2 -lvtkIOParallelXML-6.2 -lvtkParallelMPI-6.2 -lvtkParallelCore-6.2 -lz -lmpi -L/usr/lib/vtk-6.2 -L/usr/lib ------------------------------------------------------------------------------- Configure complete, now type 'make' and then 'make install'. I actually don't recall installing trilinos, but installed triangle and tetgen with synaptic package manager. I specified the tetgen include and header, so expected these to be found, but they were not? I spoke with others who specified PETSC_DIR and PETSC_ARCH, and so used their process, but mine is not found? This is somewhat of a basic question, so if anyone could help I'd really appreciate it! |
From: Zack V. <jan...@gm...> - 2017-11-23 00:17:39
|
I'll be happy to keep CC'ing the mailing list, sorry, my finger slipped! I'm afraid I didn't see your extended advice until just now.

While I do want to search for other points contained within a horizon about a given point, I was mistaken before, and this procedure actually has little to do with the large tolerance I was setting (if I understand correctly).

Thanks for your comment on efficiency. I'll look into these alternative methods. I actually just want to, given a physical point, get the value of the shape functions at that point. Should I proceed as I am, or would it make more sense to use something like http://libmesh.github.io/examples/miscellaneous_ex8.html to interpolate the values of the shape functions over the mesh? This would introduce additional interpolation error, though?

On Thu, Nov 16, 2017 at 11:25 AM, Roy Stogner <roy...@ic...> wrote:

> Oh, and now that I've noticed: please keep Cc:ing the mailing list;
> the more helpful discussion that gets archived where future users'
> search engines can find it, the better.
>
> Thanks,
> ---
> Roy
>
> On Thu, 16 Nov 2017, Roy Stogner wrote:
>
>> On Wed, 15 Nov 2017, Zack Vitoh wrote:
>>
>>> Pure virtual functions (like those found in the PointLocatorBase and
>>> related classes) were completely new to me earlier today, but I believe I
>>> understand the proper syntax, at least, so if it's of use to anyone else,
>>> here is an example of one way to use the PointLocatorBase class (to find
>>> the element 'elem_ploc' containing (0,-0.5,0))
>>>
>>> UniquePtr<PointLocatorBase> my_point_locator(PointLocatorBase::build(TREE_ELEMENTS, mesh));
>>
>> This should work, but it's not the most efficient way to go:
>> although PointLocatorTree::operator() is O(log N),
>> PointLocatorTree::build() is O(N), so you only get a fast amortized
>> lookup if you can reuse the same point locator over and over again.
>>
>> Try
>>
>> UniquePtr<PointLocatorBase> my_point_locator = mesh.sub_point_locator();
>>
>> That will create a sub-locator which reuses the same main locator
>> instead of building a new one each time.
>>
>>> Real mpl_tol = 2.0 * diam;
>>> my_point_locator->set_close_to_point_tol (mpl_tol);
>>> my_point_locator->enable_out_of_mesh_mode();
>>
>> I assume diam is an element diameter? Then you're trying to find
>> points as far as two diameters away from any current element? I'm
>> afraid that's not guaranteed to work - if you have quads or other
>> non-affine elements in your mesh, you can have mapping functions which
>> are invertible on the elements (so the mesh is perfectly valid) but
>> which become singular far away from the elements (so the
>> transformations we do when checking whether an element contains a
>> point become invalid). Beware.
>>
>> Also, with a huge tolerance, you are going to have multiple elements
>> which "contain" a point, and the point locator may not return an
>> element which *actually* contains the point, even if one exists, if it
>> finds a merely close by element first.
>>
>>> const Elem* elem_ploc = my_point_locator->operator()( Point(0.,-0.5,0.) );
>>
>> For operator(), terser syntax is:
>>
>> const Elem* elem_ploc = (*my_point_locator)( Point(0.,-0.5,0.) );
>>
>> Knowing the full ugly syntax is still useful, unfortunately, for
>> debugging with gdb...
>> ---
>> Roy |
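A minimal sketch of the shape-function evaluation being discussed, assuming the FEType passed in matches the variable of interest and that out-of-mesh queries may simply return nothing; the helper name `shapes_at_point` is an assumption, and the interpolation example linked above is a separate alternative:

```cpp
#include "libmesh/mesh.h"
#include "libmesh/point_locator_base.h"
#include "libmesh/fe_interface.h"
#include "libmesh/fe_type.h"
#include "libmesh/elem.h"

#include <vector>

using namespace libMesh;

// Sketch: evaluate every shape function of fe_type at physical point p.
std::vector<Real> shapes_at_point (const Mesh & mesh,
                                   const FEType & fe_type,
                                   const Point & p)
{
  // Reuses the mesh's master locator, so repeated calls stay cheap.
  UniquePtr<PointLocatorBase> locator = mesh.sub_point_locator();

  // Let queries outside the mesh return nullptr instead of erroring.
  locator->enable_out_of_mesh_mode();

  std::vector<Real> phi;
  const Elem * elem = (*locator)(p);
  if (!elem)
    return phi;  // point not found in the mesh

  // Map the physical point back to reference coordinates on elem ...
  const Point ref_p = FEInterface::inverse_map(elem->dim(), fe_type, elem, p);

  // ... and evaluate each shape function there.
  const unsigned int n_sf =
    FEInterface::n_shape_functions(elem->dim(), fe_type, elem->type());
  for (unsigned int i = 0; i != n_sf; ++i)
    phi.push_back(FEInterface::shape(elem->dim(), fe_type, elem, i, ref_p));

  return phi;
}
```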
From: Zack V. <jan...@gm...> - 2017-11-22 23:29:18
|
Thanks Roy and Alexander, I appreciate the insight, I will ask those maintaining PyGmsh about the order of elements if I can't seem to find the option readily myself, and then report back here. Disregarding order for now, if I simply run this as it is, is it assuming Tri3 elements and a number of elements corresponding to those given in the mesh? On Wed, Nov 22, 2017 at 4:36 PM, Alexander Lindsay <ale...@gm... > wrote: > I've never used pygmsh, but I know if you're running gmsh from the command > line, passing the option `-order 2` will create second order elements. > > On Wed, Nov 22, 2017 at 2:14 PM, Roy Stogner <roy...@ic...> > wrote: > >> >> On Wed, 22 Nov 2017, Zack Vitoh wrote: >> >> GmshIO gmsh_io(mesh); >>> gmsh_io.read("disk.vtu"); >>> >>> but--and I am not sure if this is possible--imposing additional nodes on >>> each element generated in the triangularization generated by gmsh with >>> nodes such that I have, say, Tri6 elements. >>> >> >> You have Tri3 elements now? After the read calling >> mesh.all_second_order(); >> will turn them into Tri6 elements. >> >> Is this possible through libmesh, or something I would I have to do >>> something in gmsh instead? >>> >> >> I'm afraid if you have a disk then this is something you *should* do >> in gmsh instead if that's possible. By the time libMesh reads a >> first-order triangulation, curvature information has been discarded, >> and we don't try to reconstruct it, so you'll still get straight-sided >> elements rather than curved-sided elements. The only way to get the >> best second-order boundary geometry is to patch it up manually, >> looping over boundary nodes and "snapping" them to the curve where you >> know they should be. >> --- >> Roy >> >> >> ------------------------------------------------------------ >> ------------------ >> Check out the vibrant tech community on one of the world's most >> engaging tech sites, Slashdot.org! http://sdm.link/slashdot >> _______________________________________________ >> Libmesh-users mailing list >> Lib...@li... >> https://lists.sourceforge.net/lists/listinfo/libmesh-users >> > > |
From: Alexander L. <ale...@gm...> - 2017-11-22 21:36:57
|
I've never used pygmsh, but I know if you're running gmsh from the command line, passing the option `-order 2` will create second order elements. On Wed, Nov 22, 2017 at 2:14 PM, Roy Stogner <roy...@ic...> wrote: > > On Wed, 22 Nov 2017, Zack Vitoh wrote: > > GmshIO gmsh_io(mesh); >> gmsh_io.read("disk.vtu"); >> >> but--and I am not sure if this is possible--imposing additional nodes on >> each element generated in the triangularization generated by gmsh with >> nodes such that I have, say, Tri6 elements. >> > > You have Tri3 elements now? After the read calling > mesh.all_second_order(); > will turn them into Tri6 elements. > > Is this possible through libmesh, or something I would I have to do >> something in gmsh instead? >> > > I'm afraid if you have a disk then this is something you *should* do > in gmsh instead if that's possible. By the time libMesh reads a > first-order triangulation, curvature information has been discarded, > and we don't try to reconstruct it, so you'll still get straight-sided > elements rather than curved-sided elements. The only way to get the > best second-order boundary geometry is to patch it up manually, > looping over boundary nodes and "snapping" them to the curve where you > know they should be. > --- > Roy > > > ------------------------------------------------------------ > ------------------ > Check out the vibrant tech community on one of the world's most > engaging tech sites, Slashdot.org! http://sdm.link/slashdot > _______________________________________________ > Libmesh-users mailing list > Lib...@li... > https://lists.sourceforge.net/lists/listinfo/libmesh-users > |
From: Roy S. <roy...@ic...> - 2017-11-22 21:14:44
|
On Wed, 22 Nov 2017, Zack Vitoh wrote: > GmshIO gmsh_io(mesh); > gmsh_io.read("disk.vtu"); > > but--and I am not sure if this is possible--imposing additional nodes on > each element generated in the triangularization generated by gmsh with > nodes such that I have, say, Tri6 elements. You have Tri3 elements now? After the read calling mesh.all_second_order(); will turn them into Tri6 elements. > Is this possible through libmesh, or something I would I have to do > something in gmsh instead? I'm afraid if you have a disk then this is something you *should* do in gmsh instead if that's possible. By the time libMesh reads a first-order triangulation, curvature information has been discarded, and we don't try to reconstruct it, so you'll still get straight-sided elements rather than curved-sided elements. The only way to get the best second-order boundary geometry is to patch it up manually, looping over boundary nodes and "snapping" them to the curve where you know they should be. --- Roy |
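A minimal sketch of the "snap the new mid-edge nodes to the curve" patch-up Roy describes, assuming a unit disk centered at the origin; the radius, the 5% tolerance, and the helper name `snap_disk_boundary` are all assumptions rather than anything libMesh provides:

```cpp
#include "libmesh/mesh.h"
#include "libmesh/node.h"

#include <cmath>

using namespace libMesh;

// Hypothetical helper: promote TRI3s to TRI6s, then push nodes that lie
// near the boundary (including the new mid-edge nodes) onto the circle.
void snap_disk_boundary (Mesh & mesh, const Real R = 1.0)
{
  mesh.all_second_order();

  for (MeshBase::node_iterator it = mesh.nodes_begin();
       it != mesh.nodes_end(); ++it)
    {
      Point & p = **it;
      const Real r = std::sqrt(p(0)*p(0) + p(1)*p(1));

      // Any node within 5% of the target radius is treated as a boundary
      // node and rescaled onto the circle (the tolerance is a guess).
      if (r > 0. && std::abs(r - R) < 0.05*R)
        p *= R/r;
    }

  mesh.prepare_for_use();
}
```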
From: Zack V. <jan...@gm...> - 2017-11-22 20:52:40
|
I have a very simple PyGmsh script to produce, for example, a triangulation of the disk (attached). I'd like to use this in "Introduction: Example 3" (solving the 2D Poisson problem) by replacing

MeshTools::Generation::build_square (mesh, n_elements_1dim, n_elements_1dim, xmin_mesh, xmax_mesh, ymin_mesh, ymax_mesh, QUAD9);

with

GmshIO gmsh_io(mesh);
gmsh_io.read("disk.vtu");

but--and I am not sure if this is possible--also imposing additional nodes on each element of the triangulation generated by gmsh, so that I end up with, say, Tri6 elements.

Is this possible through libmesh, or is it something I would have to do in gmsh instead?

Thanks for reading, and Happy Thanksgiving |
From: Zack V. <jan...@gm...> - 2017-11-22 18:47:59
|
Thanks Ben and Roy, I will try configuring with these options and take note of the potential licensing trouble for later. I have a related question (using a mesh created through gmsh through pygmsh), but I'll ask that in another email. On Tue, Nov 21, 2017 at 9:55 AM, Roy Stogner <roy...@ic...> wrote: > > On Mon, 20 Nov 2017, Zack Vitoh wrote: > > This does not work for me, as it does not produce anything >> >> there is a flag >> >> #ifdef LIBMESH_HAVE_TRIANGLE >> >> which is apparently false >> > > Right - if you don't have Triangle enabled, we can't run programs > which require Triangle. > > I looked at 'configure' and it says >> >> # Triangle -- enabled unless --enable-strict-lgpl is specified >> >> I did not specify this 'enable-strict-lgpl', does anyone know why this >> happened? >> > > I'm afraid this is a misleading comment: "--enable-strict-lgpl" is the > *default*, because we don't want to make it easy for users to get into > licensing trouble by accident. You need to specify > "--disable-strict-lgpl" if you want non-LGPL-licensing-compatible > third party libraries incorporated into your libMesh build. > --- > Roy > |
From: Alexander L. <ale...@gm...> - 2017-11-21 23:47:31
|
All, I get the following errors when running valgrind in parallel on a MOOSE test case. I could probably figure this out with time, but if someone smarter than I could quickly deduce whether these are false positives or something real then I would definitely not complain. (I've trimmed the output to 1 process; the errors for every other process were the same). ==25211== Memcheck, a memory error detector ==25211== Copyright (C) 2002-2017, and GNU GPL'd, by Julian Seward et al. ==25211== Using Valgrind-3.14.0.GIT and LibVEX; rerun with -h for copyright info ==25211== Command: ./combined-oprof -i cleaner_input.i ==25211== ==25211== ==25211== HEAP SUMMARY: ==25211== in use at exit: 127,864 bytes in 25 blocks ==25211== total heap usage: 634,458 allocs, 617,914 frees, 90,972,624 bytes allocated ==25211== ==25211== 192 bytes in 1 blocks are definitely lost in loss record 3 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11C9E6E0: MPIR_Datatype_set_contents.isra.0.constprop.4 (mpir_datatype.h:562) ==25211== by 0x11C9EE4C: MPIR_Type_contiguous_impl (type_contiguous.c:173) ==25211== by 0x11C9F2A4: PMPI_Type_contiguous (type_contiguous.c:300) ==25211== by 0x4E61CDA: PMPI_Type_contiguous (libmpiwrap.c:2718) ==25211== by 0x711FD1C: DataType (parallel.h:307) ==25211== by 0x711FD1C: StandardType (parallel_hilbert.h:60) ==25211== by 0x711FD1C: StandardType (parallel_implementation.h:212) ==25211== by 0x711FD1C: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::binsort() (parallel_sort.C:166) ==25211== by 0x7124A78: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::sort() (parallel_sort.C:80) ==25211== by 0x6FACD83: void libMesh::MeshCommunication::find_global_indices<libMesh::MeshBase::element_iterator>(libMesh::Parallel::Communicator const&, libMesh::BoundingBox const&, libMesh::MeshBase::element_iterator const&, libMesh::MeshBase::element_iterator const&, std::vector<unsigned int, std::allocator<unsigned int> >&) const (mesh_communication_global_indices.C:704) ==25211== by 0x7138B97: libMesh::Partitioner::partition_unpartitioned_elements(libMesh::MeshBase&, unsigned int) (partitioner.C:228) ==25211== by 0x713D1BD: libMesh::Partitioner::partition(libMesh::MeshBase&, unsigned int) (partitioner.C:75) ==25211== ==25211== 200 bytes in 1 blocks are definitely lost in loss record 4 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11CA65E2: MPIR_Datatype_set_contents.isra.0.constprop.1 (mpir_datatype.h:562) ==25211== by 0x11CA6CCB: MPIR_Type_create_struct_impl (type_create_struct.c:62) ==25211== by 0x11CA7104: PMPI_Type_create_struct (type_create_struct.c:157) ==25211== by 0x4E62FE5: PMPI_Type_create_struct (libmpiwrap.c:2727) ==25211== by 0x70700FF: StandardType (parallel_algebra.h:290) ==25211== by 0x70700FF: void libMesh::Parallel::Communicator::min<libMesh::Point>(libMesh::Point&) const (parallel_implementation.h:1591) ==25211== by 0x7067C93: libMesh::MeshTools::create_bounding_box(libMesh::MeshBase const&) (mesh_tools.C:345) ==25211== by 0x7138B6F: 
libMesh::Partitioner::partition_unpartitioned_elements(libMesh::MeshBase&, unsigned int) (partitioner.C:227) ==25211== by 0x713D1BD: libMesh::Partitioner::partition(libMesh::MeshBase&, unsigned int) (partitioner.C:75) ==25211== by 0x6F8918C: partition (mesh_base.h:728) ==25211== by 0x6F8918C: libMesh::MeshBase::prepare_for_use(bool, bool) (mesh_base.C:264) ==25211== ==25211== 200 bytes in 1 blocks are definitely lost in loss record 5 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11CAAEE0: MPIR_Datatype_set_contents.isra.0.constprop.3 (mpir_datatype.h:562) ==25211== by 0x11CAB861: PMPI_Type_create_resized (type_create_resized.c:204) ==25211== by 0x4E6343F: PMPI_Type_create_resized (libmpiwrap.c:2729) ==25211== by 0x7070120: StandardType (parallel_algebra.h:297) ==25211== by 0x7070120: void libMesh::Parallel::Communicator::min<libMesh::Point>(libMesh::Point&) const (parallel_implementation.h:1591) ==25211== by 0x7067C93: libMesh::MeshTools::create_bounding_box(libMesh::MeshBase const&) (mesh_tools.C:345) ==25211== by 0x7138B6F: libMesh::Partitioner::partition_unpartitioned_elements(libMesh::MeshBase&, unsigned int) (partitioner.C:227) ==25211== by 0x713D1BD: libMesh::Partitioner::partition(libMesh::MeshBase&, unsigned int) (partitioner.C:75) ==25211== by 0x6F8918C: partition (mesh_base.h:728) ==25211== by 0x6F8918C: libMesh::MeshBase::prepare_for_use(bool, bool) (mesh_base.C:264) ==25211== by 0x70B2669: libMesh::UnstructuredMesh::read(std::__cxx11::basic_string<char, std::char_traits<char>, std::allocator<char> > const&, void*, bool, bool) (unstructured_mesh.C:626) ==25211== ==25211== 200 bytes in 1 blocks are definitely lost in loss record 6 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11CAAEE0: MPIR_Datatype_set_contents.isra.0.constprop.3 (mpir_datatype.h:562) ==25211== by 0x11CAB861: PMPI_Type_create_resized (type_create_resized.c:204) ==25211== by 0x4E6343F: PMPI_Type_create_resized (libmpiwrap.c:2729) ==25211== by 0x711FAD5: StandardType (parallel_implementation.h:271) ==25211== by 0x711FAD5: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::binsort() (parallel_sort.C:166) ==25211== by 0x7124A78: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::sort() (parallel_sort.C:80) ==25211== by 0x6FACD83: void libMesh::MeshCommunication::find_global_indices<libMesh::MeshBase::element_iterator>(libMesh::Parallel::Communicator const&, libMesh::BoundingBox const&, libMesh::MeshBase::element_iterator const&, libMesh::MeshBase::element_iterator const&, std::vector<unsigned int, std::allocator<unsigned int> >&) const (mesh_communication_global_indices.C:704) ==25211== by 0x7138B97: libMesh::Partitioner::partition_unpartitioned_elements(libMesh::MeshBase&, unsigned int) (partitioner.C:228) ==25211== by 0x713D1BD: libMesh::Partitioner::partition(libMesh::MeshBase&, unsigned int) (partitioner.C:75) ==25211== by 0x6F8918C: partition (mesh_base.h:728) ==25211== by 0x6F8918C: libMesh::MeshBase::prepare_for_use(bool, bool) (mesh_base.C:264) ==25211== ==25211== 216 bytes in 1 blocks are definitely lost in loss 
record 8 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11CA65E2: MPIR_Datatype_set_contents.isra.0.constprop.1 (mpir_datatype.h:562) ==25211== by 0x11CA6CCB: MPIR_Type_create_struct_impl (type_create_struct.c:62) ==25211== by 0x11CA7104: PMPI_Type_create_struct (type_create_struct.c:157) ==25211== by 0x4E62FE5: PMPI_Type_create_struct (libmpiwrap.c:2727) ==25211== by 0x711FAB1: StandardType (parallel_implementation.h:264) ==25211== by 0x711FAB1: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::binsort() (parallel_sort.C:166) ==25211== by 0x7124A78: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::sort() (parallel_sort.C:80) ==25211== by 0x6FACD83: void libMesh::MeshCommunication::find_global_indices<libMesh::MeshBase::element_iterator>(libMesh::Parallel::Communicator const&, libMesh::BoundingBox const&, libMesh::MeshBase::element_iterator const&, libMesh::MeshBase::element_iterator const&, std::vector<unsigned int, std::allocator<unsigned int> >&) const (mesh_communication_global_indices.C:704) ==25211== by 0x7138B97: libMesh::Partitioner::partition_unpartitioned_elements(libMesh::MeshBase&, unsigned int) (partitioner.C:228) ==25211== by 0x713D1BD: libMesh::Partitioner::partition(libMesh::MeshBase&, unsigned int) (partitioner.C:75) ==25211== ==25211== 232 bytes in 1 blocks are definitely lost in loss record 9 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11D20398: MPIR_Dataloop_alloc_and_copy (dataloop.c:384) ==25211== by 0x11D22DC0: MPIR_Dataloop_create_contiguous (dataloop_create_contig.c:79) ==25211== by 0x11D22600: MPIR_Dataloop_create_blockindexed (dataloop_create_blockindexed.c:95) ==25211== by 0x11D24B46: MPIR_Dataloop_create_struct (dataloop_create_struct.c:174) ==25211== by 0x11D218FB: MPIR_Dataloop_create (dataloop_create.c:305) ==25211== by 0x11C9878F: MPIR_Type_commit (type_commit.c:60) ==25211== by 0x11C9887E: MPIR_Type_commit_impl (type_commit.c:106) ==25211== by 0x11C98B10: PMPI_Type_commit (type_commit.c:177) ==25211== by 0x4E444F1: PMPI_Type_commit (libmpiwrap.c:1769) ==25211== ==25211== 232 bytes in 1 blocks are definitely lost in loss record 10 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11D20398: MPIR_Dataloop_alloc_and_copy (dataloop.c:384) ==25211== by 0x11D22DC0: MPIR_Dataloop_create_contiguous (dataloop_create_contig.c:79) ==25211== by 0x11D22600: MPIR_Dataloop_create_blockindexed (dataloop_create_blockindexed.c:95) ==25211== by 0x11D24B46: MPIR_Dataloop_create_struct (dataloop_create_struct.c:174) ==25211== by 0x11D218FB: MPIR_Dataloop_create (dataloop_create.c:305) ==25211== by 0x11C987B2: MPIR_Type_commit (type_commit.c:68) ==25211== by 0x11C9887E: MPIR_Type_commit_impl (type_commit.c:106) ==25211== by 0x11C98B10: PMPI_Type_commit (type_commit.c:177) ==25211== by 0x4E444F1: PMPI_Type_commit (libmpiwrap.c:1769) ==25211== ==25211== 232 bytes in 1 blocks are 
definitely lost in loss record 11 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11D206E9: MPIR_Dataloop_dup (dataloop.c:616) ==25211== by 0x11D20DF8: MPIR_Dataloop_create (dataloop_create.c:154) ==25211== by 0x11C9878F: MPIR_Type_commit (type_commit.c:60) ==25211== by 0x11C9887E: MPIR_Type_commit_impl (type_commit.c:106) ==25211== by 0x11C98B10: PMPI_Type_commit (type_commit.c:177) ==25211== by 0x4E444F1: PMPI_Type_commit (libmpiwrap.c:1769) ==25211== by 0x7070128: StandardType (parallel_algebra.h:302) ==25211== by 0x7070128: void libMesh::Parallel::Communicator::min<libMesh::Point>(libMesh::Point&) const (parallel_implementation.h:1591) ==25211== by 0x7067C93: libMesh::MeshTools::create_bounding_box(libMesh::MeshBase const&) (mesh_tools.C:345) ==25211== by 0x7138B6F: libMesh::Partitioner::partition_unpartitioned_elements(libMesh::MeshBase&, unsigned int) (partitioner.C:227) ==25211== ==25211== 232 bytes in 1 blocks are definitely lost in loss record 12 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11D206E9: MPIR_Dataloop_dup (dataloop.c:616) ==25211== by 0x11D20DF8: MPIR_Dataloop_create (dataloop_create.c:154) ==25211== by 0x11C987B2: MPIR_Type_commit (type_commit.c:68) ==25211== by 0x11C9887E: MPIR_Type_commit_impl (type_commit.c:106) ==25211== by 0x11C98B10: PMPI_Type_commit (type_commit.c:177) ==25211== by 0x4E444F1: PMPI_Type_commit (libmpiwrap.c:1769) ==25211== by 0x7070128: StandardType (parallel_algebra.h:302) ==25211== by 0x7070128: void libMesh::Parallel::Communicator::min<libMesh::Point>(libMesh::Point&) const (parallel_implementation.h:1591) ==25211== by 0x7067C93: libMesh::MeshTools::create_bounding_box(libMesh::MeshBase const&) (mesh_tools.C:345) ==25211== by 0x7138B6F: libMesh::Partitioner::partition_unpartitioned_elements(libMesh::MeshBase&, unsigned int) (partitioner.C:227) ==25211== ==25211== 232 bytes in 1 blocks are definitely lost in loss record 13 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11D20398: MPIR_Dataloop_alloc_and_copy (dataloop.c:384) ==25211== by 0x11D22DC0: MPIR_Dataloop_create_contiguous (dataloop_create_contig.c:79) ==25211== by 0x11D21194: MPIR_Dataloop_create (dataloop_create.c:169) ==25211== by 0x11C9878F: MPIR_Type_commit (type_commit.c:60) ==25211== by 0x11C9887E: MPIR_Type_commit_impl (type_commit.c:106) ==25211== by 0x11C98B10: PMPI_Type_commit (type_commit.c:177) ==25211== by 0x4E444F1: PMPI_Type_commit (libmpiwrap.c:1769) ==25211== by 0x711FD24: commit (parallel.h:337) ==25211== by 0x711FD24: DataType (parallel.h:308) ==25211== by 0x711FD24: StandardType (parallel_hilbert.h:60) ==25211== by 0x711FD24: StandardType (parallel_implementation.h:212) ==25211== by 0x711FD24: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::binsort() (parallel_sort.C:166) ==25211== by 0x7124A78: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::sort() 
(parallel_sort.C:80) ==25211== ==25211== 232 bytes in 1 blocks are definitely lost in loss record 14 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11D20398: MPIR_Dataloop_alloc_and_copy (dataloop.c:384) ==25211== by 0x11D22DC0: MPIR_Dataloop_create_contiguous (dataloop_create_contig.c:79) ==25211== by 0x11D21194: MPIR_Dataloop_create (dataloop_create.c:169) ==25211== by 0x11C987B2: MPIR_Type_commit (type_commit.c:68) ==25211== by 0x11C9887E: MPIR_Type_commit_impl (type_commit.c:106) ==25211== by 0x11C98B10: PMPI_Type_commit (type_commit.c:177) ==25211== by 0x4E444F1: PMPI_Type_commit (libmpiwrap.c:1769) ==25211== by 0x711FD24: commit (parallel.h:337) ==25211== by 0x711FD24: DataType (parallel.h:308) ==25211== by 0x711FD24: StandardType (parallel_hilbert.h:60) ==25211== by 0x711FD24: StandardType (parallel_implementation.h:212) ==25211== by 0x711FD24: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::binsort() (parallel_sort.C:166) ==25211== by 0x7124A78: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::sort() (parallel_sort.C:80) ==25211== ==25211== 264 bytes in 1 blocks are definitely lost in loss record 15 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11D20398: MPIR_Dataloop_alloc_and_copy (dataloop.c:384) ==25211== by 0x11D23D50: MPIR_Dataloop_create_indexed (dataloop_create_indexed.c:206) ==25211== by 0x11D24FB0: DLOOP_Dataloop_create_flattened_struct (dataloop_create_struct.c:696) ==25211== by 0x11D24FB0: MPIR_Dataloop_create_struct (dataloop_create_struct.c:241) ==25211== by 0x11D218FB: MPIR_Dataloop_create (dataloop_create.c:305) ==25211== by 0x11C9878F: MPIR_Type_commit (type_commit.c:60) ==25211== by 0x11C9887E: MPIR_Type_commit_impl (type_commit.c:106) ==25211== by 0x11C98B10: PMPI_Type_commit (type_commit.c:177) ==25211== by 0x4E444F1: PMPI_Type_commit (libmpiwrap.c:1769) ==25211== by 0x711FAB9: StandardType (parallel_implementation.h:267) ==25211== by 0x711FAB9: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::binsort() (parallel_sort.C:166) ==25211== ==25211== 264 bytes in 1 blocks are definitely lost in loss record 16 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11D206E9: MPIR_Dataloop_dup (dataloop.c:616) ==25211== by 0x11D20DF8: MPIR_Dataloop_create (dataloop_create.c:154) ==25211== by 0x11C9878F: MPIR_Type_commit (type_commit.c:60) ==25211== by 0x11C9887E: MPIR_Type_commit_impl (type_commit.c:106) ==25211== by 0x11C98B10: PMPI_Type_commit (type_commit.c:177) ==25211== by 0x4E444F1: PMPI_Type_commit (libmpiwrap.c:1769) ==25211== by 0x711FADD: StandardType (parallel_implementation.h:277) ==25211== by 0x711FADD: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::binsort() (parallel_sort.C:166) ==25211== by 0x7124A78: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, 
unsigned int>::sort() (parallel_sort.C:80) ==25211== by 0x6FACD83: void libMesh::MeshCommunication::find_global_indices<libMesh::MeshBase::element_iterator>(libMesh::Parallel::Communicator const&, libMesh::BoundingBox const&, libMesh::MeshBase::element_iterator const&, libMesh::MeshBase::element_iterator const&, std::vector<unsigned int, std::allocator<unsigned int> >&) const (mesh_communication_global_indices.C:704) ==25211== ==25211== 440 bytes in 1 blocks are possibly lost in loss record 18 of 25 ==25211== at 0x4C2CAED: malloc (vg_replace_malloc.c:299) ==25211== by 0x11DE452A: trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11DE4C26: MPL_trmalloc (in /home/lindad/mpich/installed/lib/libmpi.so.0.0.0) ==25211== by 0x11D20646: MPIR_Dataloop_struct_alloc (dataloop.c:565) ==25211== by 0x11D24691: MPIR_Dataloop_create_struct (dataloop_create_struct.c:290) ==25211== by 0x11D218FB: MPIR_Dataloop_create (dataloop_create.c:305) ==25211== by 0x11C987B2: MPIR_Type_commit (type_commit.c:68) ==25211== by 0x11C9887E: MPIR_Type_commit_impl (type_commit.c:106) ==25211== by 0x11C98B10: PMPI_Type_commit (type_commit.c:177) ==25211== by 0x4E444F1: PMPI_Type_commit (libmpiwrap.c:1769) ==25211== by 0x711FAB9: StandardType (parallel_implementation.h:267) ==25211== by 0x711FAB9: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::binsort() (parallel_sort.C:166) ==25211== by 0x7124A78: libMesh::Parallel::Sort<std::pair<Hilbert::HilbertIndices, unsigned long>, unsigned int>::sort() (parallel_sort.C:80) ==25211== ==25211== LEAK SUMMARY: ==25211== definitely lost: 2,928 bytes in 13 blocks ==25211== indirectly lost: 0 bytes in 0 blocks ==25211== possibly lost: 440 bytes in 1 blocks ==25211== still reachable: 124,496 bytes in 11 blocks ==25211== suppressed: 0 bytes in 0 blocks ==25211== Reachable blocks (those to which a pointer was found) are not shown. ==25211== To see them, rerun with: --leak-check=full --show-leak-kinds=all ==25211== ==25211== For counts of detected and suppressed errors, rerun with: -v ==25211== ERROR SUMMARY: 14 errors from 14 contexts (suppressed: 0 from 0) |
From: Roy S. <roy...@ic...> - 2017-11-21 14:55:22
|
On Mon, 20 Nov 2017, Zack Vitoh wrote:

> This does not work for me, as it does not produce anything
>
> there is a flag
>
> #ifdef LIBMESH_HAVE_TRIANGLE
>
> which is apparently false

Right - if you don't have Triangle enabled, we can't run programs
which require Triangle.

> I looked at 'configure' and it says
>
> # Triangle -- enabled unless --enable-strict-lgpl is specified
>
> I did not specify this 'enable-strict-lgpl', does anyone know why this
> happened?

I'm afraid this is a misleading comment: "--enable-strict-lgpl" is the
*default*, because we don't want to make it easy for users to get into
licensing trouble by accident. You need to specify "--disable-strict-lgpl"
if you want non-LGPL-licensing-compatible third party libraries
incorporated into your libMesh build.
---
Roy |
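For anyone else who hits the same wall, a minimal compile-time guard is one way to fail loudly instead of silently producing nothing. This is only a sketch, not code from the examples: run_triangle_demo() is a made-up placeholder for whatever calls the Triangle interface.

#include "libmesh/libmesh_config.h"
#include "libmesh/libmesh_common.h"

#ifdef LIBMESH_HAVE_TRIANGLE
  run_triangle_demo();  // hypothetical function holding the Triangle-dependent meshing code
#else
  libmesh_error_msg("This code needs Triangle; rebuild libMesh configured with --disable-strict-lgpl.");
#endif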
From: Tobias M. <tob...@un...> - 2017-11-21 13:50:51
|
Ah! ok, sure. I actually saw this before but then completely forgot
about it. Thank you for the explanation; now it matches my expectations :-)

On 21.11.2017 14:40, David Knezevic wrote:
>
> On Tue, Nov 21, 2017 at 7:10 AM, Tobias Moehle
> <tob...@un... <mailto:tob...@un...>>
> wrote:
>
>     Dear all,
>
>     I have a problem with Lagrange multipliers: Maybe I
>     misinterpret my results, but somehow they don't have the effect I
>     expected.
>
>     Looking at the solution of systems_of_equations_ex3 (using
>     paraview), I see the (real part of) pressure r_p to be 0 at time
>     step 0 and then it jumps to ~10, where it stays for the rest of
>     the simulation. But isn't it actually enforced to stay at 0 "forever"?
>
>
> This is because of the following lines:
>
> // We can set the mean of the pressure by setting Falpha. Typically
> // a value of zero is chosen, but the value should be arbitrary.
> navier_stokes_system.rhs->add(navier_stokes_system.rhs->size()-1, 10.);
>
> So in this case we set the mean pressure to be 10. We do this just for
> illustration purposes, to show that the Lagrange multiplier can be
> used to set the mean pressure to any value you like (but as mentioned
> in the comment normally one would just set it to zero). If you want to
> change the value, just change the "10" in the line above.
>
> David |
From: David K. <dav...@ak...> - 2017-11-21 13:40:49
|
On Tue, Nov 21, 2017 at 7:10 AM, Tobias Moehle <tob...@un...> wrote:

> Dear all,
>
> I have a problem with Lagrange multipliers: Maybe I misinterpret my
> results, but somehow they don't have the effect I expected.
>
> Looking at the solution of systems_of_equations_ex3 (using paraview), I
> see the (real part of) pressure r_p to be 0 at time step 0 and then it
> jumps to ~10, where it stays for the rest of the simulation. But isn't it
> actually enforced to stay at 0 "forever"?

This is because of the following lines:

// We can set the mean of the pressure by setting Falpha. Typically
// a value of zero is chosen, but the value should be arbitrary.
navier_stokes_system.rhs->add(navier_stokes_system.rhs->size()-1, 10.);

So in this case we set the mean pressure to be 10. We do this just for
illustration purposes, to show that the Lagrange multiplier can be
used to set the mean pressure to any value you like (but as mentioned
in the comment normally one would just set it to zero). If you want to
change the value, just change the "10" in the line above.

David |
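If the goal is a prescribed, possibly time-dependent target rather than the constant 10, the natural place to change it is still the right-hand-side entry, not the Kp_alpha/Kalpha_p matrix rows. A minimal sketch, assuming the assembly routine can see the system and that its time member is kept up to date by the time-stepping loop; the linear ramp is made up for illustration and is not part of systems_of_equations_ex3:

// keep the original Kp_alpha / Kalpha_p assembly unchanged, and feed the
// constraint target through the extra rhs entry at each assembly instead
const Real target = 10.*navier_stokes_system.time;   // hypothetical ramp in time
navier_stokes_system.rhs->add(navier_stokes_system.rhs->size()-1, target);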
From: Tobias M. <tob...@un...> - 2017-11-21 13:26:09
|
Dear all,

I have a problem with Lagrange multipliers: Maybe I misinterpret my
results, but somehow they don't have the effect I expected.

Looking at the solution of systems_of_equations_ex3 (using paraview), I
see the (real part of) pressure r_p to be 0 at time step 0 and then it
jumps to ~10, where it stays for the rest of the simulation. But isn't it
actually enforced to stay at 0 "forever"?

I also tried to change its value to some other fixed (and time-dependent)
ones:

diff --git a/examples/systems_of_equations/systems_of_equations_ex3/systems_of_equations_ex3.C b/examples/systems_of_equations/systems_of_equations_ex3/systems_of_equations_ex3.C
index b626203..f576138 100644
--- a/examples/systems_of_equations/systems_of_equations_ex3/systems_of_equations_ex3.C
+++ b/examples/systems_of_equations/systems_of_equations_ex3/systems_of_equations_ex3.C
@@ -614,8 +614,15 @@ void assemble_stokes (EquationSystems & es,
           // negative one. Here we do not.
           for (unsigned int i=0; i<n_p_dofs; i++)
             {
-              Kp_alpha(i,0) += JxW[qp]*psi[i][qp];
-              Kalpha_p(0,i) += JxW[qp]*psi[i][qp];
+              // original: set p=0
+              //Kp_alpha(i,0) += JxW[qp]*psi[i][qp];
+              //Kalpha_p(0,i) += JxW[qp]*psi[i][qp];
+              // lets fix the pressure to some finite value:
+              //Kp_alpha(i,0) += JxW[qp]*psi[i][qp]-1.*JxW[qp];
+              //Kalpha_p(0,i) += JxW[qp]*psi[i][qp]-1.*JxW[qp];
+              // lets make the pressure growing with time.
+              Kp_alpha(i,0) += JxW[qp]*psi[i][qp]-theta*dt*JxW[qp];
+              Kalpha_p(0,i) += JxW[qp]*psi[i][qp]-theta*dt*JxW[qp];
               for (unsigned int j=0; j<n_u_dofs; j++)
                 {
                   Kpu(i,j) += JxW[qp]*psi[i][qp]*dphi[j][qp](0);

However, even though the values change a bit, the general behaviour stays
the same.

Could someone please help me clarify what is wrong here?

Thanks in advance,
Tobias |
From: Zack V. <jan...@gm...> - 2017-11-21 00:27:25
|
This does not work for me, as it does not produce anything.

There is a flag

#ifdef LIBMESH_HAVE_TRIANGLE

which is apparently false.

I looked at 'configure' and it says

# Triangle -- enabled unless --enable-strict-lgpl is specified

I did not specify this 'enable-strict-lgpl', does anyone know why this
happened? |
From: Alexander L. <ale...@gm...> - 2017-11-18 21:51:23
|
John's on the right track. The correct flag is: "-order 2"

On Fri, Nov 17, 2017 at 6:35 AM, Roy Stogner <roy...@ic...> wrote:
>
> On Thu, 16 Nov 2017, John Peterson wrote:
>
>> I think you need to run gmsh with a special option (maybe "-2") to make
>> sure that you get EDGE3 elements. Those are required for using SECOND,
>> LAGRANGE finite elements.
>
> Alternatively, if you have a mesh object with lower-order elements,
> you can convert them in libMesh to higher-order elements using
>
> mesh.all_second_order();
> ---
> Roy |
From: Renato P. <re...@gm...> - 2017-11-18 21:25:12
|
So .... I realized my issue is in fact a linear solver convergence issue... It seems that I created a difficult matrix to solve with the pressure variable being hacked sort of manually. Getting the tolerance down and increasing the solver iterations did the job (although runtime went up). Please disregard the previous questions about the mesh... the mesh is ok. On the other hand: does libMesh have tools to assess the quality of the matrix being generated? To understand the results of the linear solver, I am using the function "print_converged_reason". Any better idea? Thanks and sorry for the previous stuff. Renato On Sat, Nov 18, 2017 at 3:33 PM, Renato Poli <re...@gm...> wrote: > Hi > > I am struggling here with boundary conditions in a system of equations. > I am solving a system for 3 vars: U,V,P (displacements and pressure). > I decoupled pressure and displacements as a first attempt (in fact, Kpp > gets to be an identity matrix), Fp is zero. > > To test, I generated a mesh in CGAL and a similar one in GMSH. > I translate internally the CGAL mesh to libMesh. > For GMSH, I am using gmsh_io::read function. > > It happens that, in CGAL translated mesh, when I set a boundary condition > in variable P, the solution for U and V gets invalid. The same does _not_ > happen in the mesh generated in GMSH (works beautifully). > > So, I believe the problem would be in the translation from CGAL to > libMesh, which is indeed very straightforward. I had already validated the > translation for 2 variables (U,V). It looked fine. > > Would you have an advice? What might be happening? In which case decoupled > variables might interact in the solver? > > I am currently in a reverse engineering effort in the gmsh_io::read > function ... (no pain, no gain...) > > Thanks, > Renato > > > |
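In case it helps anyone tuning the solver the same way: if the system takes its solver settings from the EquationSystems parameters, as the library examples do, the tolerance and iteration cap can be tightened before solving. A minimal sketch; "equation_systems" is whatever your EquationSystems object is named, and the numbers are placeholders, not recommendations:

// tighter linear solve, at the cost of more Krylov iterations
equation_systems.parameters.set<Real>("linear solver tolerance") = 1.e-12;
equation_systems.parameters.set<unsigned int>("linear solver maximum iterations") = 5000;

print_converged_reason() will then report why the iteration stopped under the new settings.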
From: Renato P. <re...@gm...> - 2017-11-18 17:33:11
|
Hi I am struggling here with boundary conditions in a system of equations. I am solving a system for 3 vars: U,V,P (displacements and pressure). I decoupled pressure and displacements as a first attempt (in fact, Kpp gets to be an identity matrix), Fp is zero. To test, I generated a mesh in CGAL and a similar one in GMSH. I translate internally the CGAL mesh to libMesh. For GMSH, I am using gmsh_io::read function. It happens that, in CGAL translated mesh, when I set a boundary condition in variable P, the solution for U and V gets invalid. The same does _not_ happen in the mesh generated in GMSH (works beautifully). So, I believe the problem would be in the translation from CGAL to libMesh, which is indeed very straightforward. I had already validated the translation for 2 variables (U,V). It looked fine. Would you have an advice? What might be happening? In which case decoupled variables might interact in the solver? I am currently in a reverse engineering effort in the gmsh_io::read function ... (no pain, no gain...) Thanks, Renato |
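For readers debugging a similar hand-rolled translation, a minimal sketch of what the CGAL-to-libMesh step usually has to do, including the boundary ids that a boundary condition on P would need to attach to. The container names (cgal_points, cgal_tris), the index layout, and the raw-pointer add_elem call are assumptions for illustration, not code from this thread:

#include "libmesh/replicated_mesh.h"
#include "libmesh/face_tri3.h"

ReplicatedMesh mesh(init.comm());

// nodes
for (std::size_t i = 0; i < cgal_points.size(); ++i)
  mesh.add_point(Point(cgal_points[i].x(), cgal_points[i].y()), i);

// triangles
for (const auto & t : cgal_tris)
  {
    Elem * elem = mesh.add_elem(new Tri3);
    elem->set_node(0) = mesh.node_ptr(t[0]);
    elem->set_node(1) = mesh.node_ptr(t[1]);
    elem->set_node(2) = mesh.node_ptr(t[2]);
    // sides on the outer boundary need an id, or boundary conditions
    // (e.g. on P) have nothing to attach to:
    // mesh.get_boundary_info().add_side(elem, side_number, boundary_id);
  }

mesh.prepare_for_use();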
From: Roy S. <roy...@ic...> - 2017-11-17 13:35:47
|
On Thu, 16 Nov 2017, John Peterson wrote:

> I think you need to run gmsh with a special option (maybe "-2") to make
> sure that you get EDGE3 elements. Those are required for using SECOND,
> LAGRANGE finite elements.

Alternatively, if you have a mesh object with lower-order elements,
you can convert them in libMesh to higher-order elements using

mesh.all_second_order();
---
Roy |
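Put together, a minimal sketch of that second route; the file name and the LibMeshInit object "init" are placeholders, and the conversion has to happen before the SECOND-order variables are added and the systems initialized:

Mesh mesh(init.comm());
mesh.read("square.msh");    // EDGE2/TRI3 as written by gmsh
mesh.all_second_order();    // promote to EDGE3/TRI6 inside libMesh
// ... now add SECOND, LAGRANGE variables and call equation_systems.init()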
From: John P. <jwp...@gm...> - 2017-11-17 00:14:52
|
On Thu, Nov 16, 2017 at 5:08 PM, Renato Poli <re...@gm...> wrote:

> Hi
>
> I generated a square mesh in GMSH and exported to read into libMesh.
>
> I use second order approximation for variables.
>
> After reading, libMesh is exiting with the following error:
>
>
> ERROR: Bad ElemType = EDGE2 for SECOND order approximation!
> Stack frames: 14
> 0: libMesh::print_trace(std::ostream&)
> 1: libMesh::MacroFunctions::report_error(char const*, int, char const*, char const*)
> 2: /usr/local/lib/libmesh_opt.so.0(+0x4c8cfb) [0x7fa7cc123cfb]
> 3: libMesh::FEInterface::n_dofs_at_node(unsigned int, libMesh::FEType const&, libMesh::ElemType, unsigned int)
> 4: libMesh::DofMap::reinit(libMesh::MeshBase&)
> 5: libMesh::DofMap::distribute_dofs(libMesh::MeshBase&)
> 6: libMesh::System::init_data()
> 7: libMesh::ImplicitSystem::init_data()
> 8: libMesh::LinearImplicitSystem::init_data()
> 9: libMesh::System::init()
> 10: libMesh::EquationSystems::init()
> 11: obj/bin/abada_sc4() [0x57c0ed]
> 12: __libc_start_main
> 13: obj/bin/abada_sc4() [0x4322f9]
> [0] src/fe/fe_lagrange.C, line 642, compiled Aug 14 2017 at 15:16:26
> application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
> [unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
> :
> system msg for write_line failure : Bad file descriptor
>
>
> My GMSH mesh looks good - I have added boundary conditions, my msh
> file has PhysicalNames etc.
>
> Everything looks nice.
>
> Any idea on what might be wrong?
>
> This is my ".geo" file if anyone wants to enjoy a gmsh adventure...
>
> cl__1 = 1;
> Point(1) = {0, 0, 0, 1};
> Point(2) = {0, 100, 0, 1};
> Point(3) = {100, 100, 0, 1};
> Point(4) = {100, 0, 0, 1};
> Line(1) = {1, 2};
> Line(2) = {2, 3};
> Line(3) = {3, 4};
> Line(4) = {4, 1};
> Line Loop(6) = {1, 2, 3, 4};
> Plane Surface(7) = {6};
> Physical Line("ext") = {1, 2, 3, 4};

I think you need to run gmsh with a special option (maybe "-2") to make
sure that you get EDGE3 elements. Those are required for using SECOND,
LAGRANGE finite elements.

--
John |
From: Renato P. <re...@gm...> - 2017-11-17 00:08:31
|
Hi

I generated a square mesh in GMSH and exported to read into libMesh.

I use second order approximation for variables.

After reading, libMesh is exiting with the following error:

ERROR: Bad ElemType = EDGE2 for SECOND order approximation!
Stack frames: 14
0: libMesh::print_trace(std::ostream&)
1: libMesh::MacroFunctions::report_error(char const*, int, char const*, char const*)
2: /usr/local/lib/libmesh_opt.so.0(+0x4c8cfb) [0x7fa7cc123cfb]
3: libMesh::FEInterface::n_dofs_at_node(unsigned int, libMesh::FEType const&, libMesh::ElemType, unsigned int)
4: libMesh::DofMap::reinit(libMesh::MeshBase&)
5: libMesh::DofMap::distribute_dofs(libMesh::MeshBase&)
6: libMesh::System::init_data()
7: libMesh::ImplicitSystem::init_data()
8: libMesh::LinearImplicitSystem::init_data()
9: libMesh::System::init()
10: libMesh::EquationSystems::init()
11: obj/bin/abada_sc4() [0x57c0ed]
12: __libc_start_main
13: obj/bin/abada_sc4() [0x4322f9]
[0] src/fe/fe_lagrange.C, line 642, compiled Aug 14 2017 at 15:16:26
application called MPI_Abort(MPI_COMM_WORLD, 1) - process 0
[unset]: write_line error; fd=-1 buf=:cmd=abort exitcode=1
:
system msg for write_line failure : Bad file descriptor

My GMSH mesh looks good - I have added boundary conditions, my msh
file has PhysicalNames etc.

Everything looks nice.

Any idea on what might be wrong?

This is my ".geo" file if anyone wants to enjoy a gmsh adventure...

cl__1 = 1;
Point(1) = {0, 0, 0, 1};
Point(2) = {0, 100, 0, 1};
Point(3) = {100, 100, 0, 1};
Point(4) = {100, 0, 0, 1};
Line(1) = {1, 2};
Line(2) = {2, 3};
Line(3) = {3, 4};
Line(4) = {4, 1};
Line Loop(6) = {1, 2, 3, 4};
Plane Surface(7) = {6};
Physical Line("ext") = {1, 2, 3, 4};

Thanks.
Renato |
From: Roy S. <roy...@ic...> - 2017-11-16 16:25:36
|
Oh, and now that I've noticed: please keep Cc:ing the mailing list; the
more helpful discussion that gets archived where future users' search
engines can find it, the better.

Thanks,
---
Roy

On Thu, 16 Nov 2017, Roy Stogner wrote:

>
> On Wed, 15 Nov 2017, Zack Vitoh wrote:
>
>> Pure virtual functions (like those found in the PointLocatorBase and
>> related classes) were completely new to me earlier today, but I believe I
>> understand the proper syntax, at least, so if it's of use to anyone else,
>> here is an example of one way to use the PointLocatorBase class (to find
>> the element 'elem_ploc' containing (0,-0.5,0))
>>
>> UniquePtr<PointLocatorBase>
>> my_point_locator(PointLocatorBase::build(TREE_ELEMENTS, mesh));
>
> This should work, but it's not the most efficient way to go:
> although PointLocatorTree::operator() is O(log N),
> PointLocatorTree::build() is O(N), so you only get a fast amortized
> lookup if you can reuse the same point locator over and over again.
>
> Try
>
> UniquePtr<PointLocatorBase> my_point_locator = mesh.sub_point_locator();
>
> That will create a sub-locator which reuses the same main locator
> instead of building a new one each time.
>
>> Real mpl_tol = 2.0 * diam;
>> my_point_locator->set_close_to_point_tol (mpl_tol);
>> my_point_locator->enable_out_of_mesh_mode();
>
> I assume diam is an element diameter? Then you're trying to find
> points as far as two diameters away from any current element? I'm
> afraid that's not guaranteed to work - if you have quads or other
> non-affine elements in your mesh, you can have mapping functions which
> are invertible on the elements (so the mesh is perfectly valid) but
> which become singular far away from the elements (so the
> transformations we do when checking whether an element contains a
> point become invalid). Beware.
>
> Also, with a huge tolerance, you are going to have multiple elements
> which "contain" a point, and the point locator may not return an
> element which *actually* contains the point, even if one exists, if it
> finds a merely close by element first.
>
>> const Elem* elem_ploc = my_point_locator->operator()( Point(0.,-0.5,0.) );
>
> For operator(), terser syntax is:
>
> const Elem* elem_ploc = (*my_point_locator)( Point(0.,-0.5,0.) );
>
> Knowing the full ugly syntax is still useful, unfortunately, for
> debugging with gdb...
> ---
> Roy |
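Pulling the snippets of this thread together, a minimal end-to-end sketch; it assumes "mesh" is an already-prepared mesh and a libMesh version where sub_point_locator() returns a UniquePtr, as in the quotes above:

UniquePtr<PointLocatorBase> locator = mesh.sub_point_locator();
locator->enable_out_of_mesh_mode();   // return nullptr instead of erroring for outside points

const Elem * elem_ploc = (*locator)( Point(0., -0.5, 0.) );

if (elem_ploc)
  libMesh::out << "point is in element " << elem_ploc->id() << std::endl;
else
  libMesh::out << "point lies outside the mesh" << std::endl;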