From: Roy Stogner <roystgnr@ic...>  2008-02-14 23:23:38

On Thu, 14 Feb 2008, Roy Stogner wrote:

> This would break backwards compatibility with saved solution files
> which use cubic or quartic discontinuous elements.

Correction: this would break backwards compatibility with saved solution files which use cubic or quartic discontinuous elements in 2D, or which use quadratic, cubic, or quartic discontinuous elements in 3D; the odd ordering starts earlier in 3D.

---
Roy
From: Roy Stogner <roystgnr@ic...>  2008-02-14 23:10:48

We currently support p=0 through p=4 with discontinuous elements. As long as I'm messing with these elements to add second derivative support, I'd like to add support for arbitrary p. However, the hand-coded shape functions for p=3 and p=4 aren't really in the same natural order as the shape functions would be for p=5+, so I'd like to reorder them at the same time.

This would break backwards compatibility with saved solution files which use cubic or quartic discontinuous elements. Are there any objections? I'm hoping that the number of people who are doing DG with high-p elements and who can't conveniently regenerate their old solution files is zero.

---
Roy
From: Roy Stogner <roystgnr@ic...>  2008-02-14 20:27:40

On Thu, 14 Feb 2008, Roy Stogner wrote:

> Thank you for the report. On first glance, it looks like the problem
> here is that FE<>::init_shape_functions is what resizes the storage
> space for second derivatives, but that function gets overridden in
> FEXYZ, which was never updated with second derivative support. I'll
> fix it in SVN this weekend.

A beginning of a fix (it should work in 1D, or in 2D/3D if you make specific requests of the FEXYZ object which don't include calculating d2phi) is now in the SVN head. I'll get the rest of the fix in later, but anyone whose code uses XYZ elements might want to try out the latest Subversion libMesh.

I don't think we use discontinuous bases in the examples, and my code is mostly C1 stuff, not discontinuous, so we don't have any FEXYZ test coverage yet.

---
Roy
From: Benjamin Kirk <benjamin.kirk@na...>  2008-02-14 15:46:57

Thanks a lot for that! It just so happens that I've been messing around with DofMap::compute_sparsity() to make it multithreaded; I'll see that your fix makes it in there.

Also, now that Roy added a convenient way to get the continuity for a finite element family, we should probably set implicit_neighbor_dofs=true whenever the finite element space is discontinuous.

Ben

On 2/13/08 3:55 PM, "Lorenzo Botti" <bottilorenzo@...> wrote:

> Hi David and John,
>
> I have a dirty DG ex14. It works with h and p refinement with a little
> change in DofMap::compute_sparsity(...) after the condition
> if (implicit_neighbor_dofs) {...}
>
> Lorenzo
>
> On 13 Feb 2008, at 15:38, John Peterson wrote:
>
>> Short answer: yes.
>>
>> Thread here:
>> http://sourceforge.net/mailarchive/message.php?msg_id=41E893F9.9000104%40cfdlab.ae.utexas.edu
>>
>> No examples yet, but this would probably be a good idea. Ben, do you
>> have any toy DG codes we could make into an example?
>>
>> J
>>
>> David Knezevic writes:
>>> Hi all, I was just wondering if anyone has done any discontinuous
>>> Galerkin computations using libMesh? I'm just asking this out of
>>> curiosity, I'm not looking to do any DG computations at the
>>> moment...
>>>
>>> Thanks,
>>> Dave
>>
>> _______________________________________________
>> Libmesh-users mailing list
>> Libmesh-users@...
>> https://lists.sourceforge.net/lists/listinfo/libmesh-users
From: Roy Stogner <roystgnr@ic...>  2008-02-14 15:27:47

On Thu, 14 Feb 2008, Georg Wenig wrote:

> Using libmesh 0.6.2 compiled with support for second derivatives, the
> following program crashes. The error is an invalid read attempt in
> fe_map.C, see below:
>
> error: attempt to subscript container with out-of-bounds index 0, but
> container only holds 0 elements.
>
> #5 0x0000002a95abc368 in FEBase::compute_single_point_map
>    (this=0x5697b0, qw=@0x567660, elem=0x549d70, p=0)
>    at src/fe/fe_map.C:64

Thank you for the report. On first glance, it looks like the problem here is that FE<>::init_shape_functions is what resizes the storage space for second derivatives, but that function gets overridden in FEXYZ, which was never updated with second derivative support. I'll fix it in SVN this weekend.

---
Roy
From: Georg Wenig <georg.wenig@we...>  2008-02-14 15:18:31

Hi,

using libmesh 0.6.2 compiled with support for second derivatives, the following program crashes:

// ======= code below ====== //
#include "libmesh.h"
#include "mesh.h"
#include "mesh_generation.h"
#include "equation_systems.h"
#include "fe.h"
#include "linear_implicit_system.h"
#include "transient_system.h"
#include "elem.h"

Number initial_values (Point const& p,
                       Parameters const& parameters,
                       std::string const& sys_name,
                       std::string const& unknown_name)
{
  return 0.0;
}

int main (int argc, char** argv)
{
  libMesh::init (argc, argv);
  {
    Mesh mesh (1);
    MeshTools::Generation::build_line (mesh, 100, -0.5, 0.5, EDGE3);

    EquationSystems equation_systems (mesh);
    TransientLinearImplicitSystem& system =
      equation_systems.add_system<TransientLinearImplicitSystem> ("test");

    // system.add_variable ("u", FIRST, LAGRANGE); // fine
    system.add_variable ("u", FIRST, XYZ); // crash

    equation_systems.init ();
    system.project_solution (initial_values, 0, equation_systems.parameters);
  }
  return libMesh::close ();
}
// ===== end of code ======= //

The error is an invalid read attempt in fe_map.C, see below:

error: attempt to subscript container with out-of-bounds index 0, but container only holds 0 elements.

#5  0x0000002a95abc368 in FEBase::compute_single_point_map (this=0x5697b0, qw=@0x567660, elem=0x549d70, p=0) at src/fe/fe_map.C:64
#6  0x0000002a95abe56c in FEBase::compute_affine_map (this=0x5697b0, qw=@0x567660, elem=0x549d70) at src/fe/fe_map.C:424
#7  0x0000002a95abe149 in FEBase::compute_map (this=0x5697b0, qw=@0x567660, elem=0x549d70) at src/fe/fe_map.C:481
#8  0x0000002a959e6229 in FE<1u, (libMeshEnums::FEFamily)5>::reinit (this=0x5697b0, elem=0x549d70, pts=0x0) at src/fe/fe.C:209
#9  0x0000002a95a3a953 in FEXYZ<1u>::reinit (this=0x5697b0, elem=0x549d70, pts=0x0) at /home/wenig/src/libmesh-0.6.2/include/fe/fe.h:606
#10 0x0000002a95e338f3 in System::project_vector (this=0x54e610, fptr=0x411388 <initial_values(Point const&, Parameters const&, std::string const&, std::string const&)>, gptr=0, parameters=@0x7fbfffe438, new_vector=@0x54e590) at src/solvers/system_projection.C:1057
#11 0x0000002a95e34732 in System::project_solution (this=0x54e610, fptr=0x411388 <initial_values(Point const&, Parameters const&, std::string const&, std::string const&)>, gptr=0, parameters=@0x7fbfffe438) at src/solvers/system_projection.C:541
#12 0x0000000000411624 in main (argc=1, argv=0x7fbfffe628) at test.C:34

Second derivatives are not supported for the XYZ family, but the program doesn't actually use them, so maybe this should work? How would you implement second derivatives for the XYZ family? Just implementing the corresponding functions in fe_xyz_shape_?D.C can't be enough, as these functions are not even called in the example above...

Regards,
Georg
From: Roy Stogner <roystgnr@ic...>  2008-02-14 01:23:23

On Thu, 14 Feb 2008, Lorenzo Botti wrote:

> I have tried to use DofMap::_dof_coupling but it seems that an issue
> with amr arises... maybe I miss something obvious...
> For example, if I solve an advection-diffusion equation in 3D with a
> semi-implicit scheme, I want to save the memory required to store the
> Kuv, Kuw, Kvw (and also Kvu, Kwv, Kwu) blocks of each of my element
> matrices. With _dof_coupling I can obtain the right sparsity pattern,
> but then at assembly time I'll need a sparse element matrix. I can
> instead decide to assemble only Kuu, Kvv, Kww as three different dense
> matrices. This works without amr, and it saves a lot of memory.

If this saves a lot of memory, you may be doing something wrong. A tri-quadratic element has 27 degrees of freedom per component. If you're using the default 8-byte floating point numbers, then using three 27x27 matrices would take up about 17 kB, whereas using a (27*3)x(27*3) matrix would take up about 51 kB. In any significant 3D problem, 51 kB should be lost in the noise.

But you may be right that the constrain_element* functions are ignoring _dof_coupling; I'll leave it to Ben to check on that.

---
Roy
From: Lorenzo Botti <bottilorenzo@gm...>  2008-02-14 00:28:06

On 13 Feb 2008, at 03:03, Roy Stogner wrote:

>> I output the matrix assembled in libmesh using a PETSc function, and
>> I find lots of zero values in the matrix. Because the matrix in PETSc
>> is stored using a sparse compressed storage format, zero values
>> should not appear.
>
> That is incorrect. The sparsity pattern we tell PETSc to use comes
> from the connectivity of your mesh, which is much more efficient than
> building it on the fly as you assemble the matrix with your particular
> equation. Some multi-variable equations (Navier-Stokes, in
> particular) and/or some choices of basis functions can then leave you
> with zero entries. If your equation is of the former type, you can use
> DofObject::_dof_coupling to tell libMesh not to bother allocating
> matrix entries that you know will always be zero.

Hi Roy,

thanks for the explanations, but I have a doubt. I have tried to use DofMap::_dof_coupling but it seems that an issue with amr arises... maybe I miss something obvious...

For example, if I solve an advection-diffusion equation in 3D with a semi-implicit scheme, I want to save the memory required to store the Kuv, Kuw, Kvw (and also Kvu, Kwv, Kwu) blocks of each of my element matrices. With _dof_coupling I can obtain the right sparsity pattern, but then at assembly time I'll need a sparse element matrix. I can instead decide to assemble only Kuu, Kvv, Kww as three different dense matrices. This works without amr, and it saves a lot of memory.

If I want to use amr, the problem is that the method DofMap::constrain_element_matrix_and_vector wants to constrain all my system variables at the same time, preventing me from assembling Kuu, Kvv, Kww as different matrices. Is there a simple solution?

Lorenzo