From: John Peterson <peterson@cf...> - 2006-04-27 17:58:11

Hi,

We are putting together a libmesh paper currently. Please post references for any papers, conference articles, etc. you have written where libmesh was used (submitted or accepted!). The best format is bibtex style, but anything is fine.

Thanks!
John W. Peterson
From: Steffen Petersen <steffen.petersen@tu...> - 2006-04-27 12:16:20

> On Wed, 26 Apr 2006, Steffen Petersen wrote:
>
>> I just tried to clean up some shape function files
>> and code the Bernstein shapes in a more generic way
>> (including arbitrary orders for quadrilaterals).
>
> Oh, and out of curiosity - can you make the high order Bernstein bases
> hierarchic? (i.e. such that the basis functions at each node for
> order p+1 begin with the basis functions at that node for order p)

Unfortunately, that is not possible :( The reason why I still prefer the Bernstein shapes is that they give much better performance for acoustic computations at high wave numbers. However, there is probably no meaningful reason to use non-hierarchic shapes with local p refinement.

> Currently we only support p-adaptive constraints on elements with
> hierarchic bases on side nodes, System::project_vector will only
> handle p refinement correctly on elements with hierarchic bases on
> every node, and I have no plans to do the work necessary to handle
> more general cases anytime soon. ;)
>
> Of course, non-hierarchic high-p elements are still useful, and just
> getting the correct function numbering on a hierarchic basis can be
> tricky (check out my shameful hack at src/utils/number_lookups.h
> sometime), but I'd hate for you to start getting bad adaptive p
> results without warning you about why.
>
> In fact, should I add a "hierarchic_bases" FEInterface function to let
> the library know which elements do and don't have that characteristic?

Seems reasonable to me.

Steffen

> It would allow the code to switch to the more expensive constraint and
> projection calculations that non-hierarchic elements require in the
> future, and would allow us to barf with a warning when someone tries
> to do p adaptivity and p projections on elements which don't support
> them today.
> --
> Roy
From: Steffen Petersen <steffen.petersen@tu...> - 2006-04-26 21:28:17

> On Wed, 26 Apr 2006, Steffen Petersen wrote:
>
>> I just tried to clean up some shape function files
>> and code the Bernstein shapes in a more generic way
>> (including arbitrary orders for quadrilaterals).
>> Although I have adapted the corresponding fe_ElemType.C
>> (i.e. the n_dofs functions),
>> I got this error for p > 6:
>>
>>   ERROR: unsigned char too small to hold Elem::_p_level!
>>   Recompile with Elem::_p_level set to something bigger!
>>   [0] /home/mubsp/tmp/libmesh/include/geom/elem.h, line 1325, compiled Apr 21 2006 at 12:21:08
>>
>> For different element types, the order seems to be
>> fixed at different levels. Can you perhaps give
>> me a quick hint?
>
> Because no element type can support arbitrarily high p (in fact,
> floating point and matrix conditioning seem to kill us around p = 11),

Jep, and (depending on the problem you solve) the poor convergence of the iterative solvers may also somewhat blow the benefit of higher orders.

> I've put a max_order function in the FEInterface code to tell the
> DofMap just how high each element can go. If the base polynomial
> degree + the p_level elevation exceeds this max order, the DofMap
> bumps the p_level down again. This obviously isn't a complete
> solution (if p refinement is impossible, 99% of the time you'd prefer
> h refinement to nothing) but it's a start.
>
> The problem comes in when the base polynomial degree exceeds that
> max_order even with p_level() == 0 - then the DofMap tries to drop the
> p_level to -1 (which casts turn into 2^32 - 1 or so) and the p_level()
> function freaks.
>
> I put an assert in dof_map.C this morning to prevent that misleading
> error message from cropping up; should I add a more informative error
> message to dof_map.C to take its place?

Perhaps a message with a hint to the max_order() function. I was expecting something like this, but obviously needed a more direct hint.

Thanks,
Steffen

> Oh, and to summarize all the rambling:
> In src/fe/fe_interface.C, edit the FEInterface::max_order() function
> at the bottom of the file to return a higher number for the
> BERNSTEIN/QUAD* case.
> --
> Roy
From: Roy Stogner <roystgnr@ic...> - 2006-04-26 21:23:26

On Wed, 26 Apr 2006, Steffen Petersen wrote:

> I just tried to clean up some shape function files
> and code the Bernstein shapes in a more generic way
> (including arbitrary orders for quadrilaterals).

Oh, and out of curiosity - can you make the high order Bernstein bases hierarchic? (i.e. such that the basis functions at each node for order p+1 begin with the basis functions at that node for order p)

Currently we only support p-adaptive constraints on elements with hierarchic bases on side nodes, System::project_vector will only handle p refinement correctly on elements with hierarchic bases on every node, and I have no plans to do the work necessary to handle more general cases anytime soon. ;)

Of course, non-hierarchic high-p elements are still useful, and just getting the correct function numbering on a hierarchic basis can be tricky (check out my shameful hack at src/utils/number_lookups.h sometime), but I'd hate for you to start getting bad adaptive p results without warning you about why.

In fact, should I add a "hierarchic_bases" FEInterface function to let the library know which elements do and don't have that characteristic? It would allow the code to switch to the more expensive constraint and projection calculations that non-hierarchic elements require in the future, and would allow us to barf with a warning when someone tries to do p adaptivity and p projections on elements which don't support them today.

--
Roy
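[Editor's note: for readers skimming the thread, the nesting property being asked about, and why Bernstein bases cannot have it, can be written down in one line each. This is standard textbook background, not anything specific to the libMesh implementation.]

```latex
% A 1D basis \{\phi_0,\dots,\phi_p\} is hierarchic (nested) if raising
% the order only appends functions:
\{\phi_0,\dots,\phi_p\} \subset \{\phi_0,\dots,\phi_{p+1}\}.
% Example: integrated-Legendre shape functions, where for k \ge 2
\phi_k(\xi) = \int_{-1}^{\xi} P_{k-1}(t)\,dt,
% so the order-(p+1) basis reuses every order-p function unchanged.
% By contrast, the Bernstein basis of degree p on [0,1],
B_{i,p}(\xi) = \binom{p}{i}\,\xi^i (1-\xi)^{p-i}, \qquad i = 0,\dots,p,
% changes every member when p increases: no B_{i,p} appears among the
% B_{j,p+1}, so no function survives into the next order and the basis
% cannot be hierarchic.
```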
From: Roy Stogner <roystgnr@ic...> - 2006-04-26 21:08:53

On Wed, 26 Apr 2006, Steffen Petersen wrote:

> I just tried to clean up some shape function files
> and code the Bernstein shapes in a more generic way
> (including arbitrary orders for quadrilaterals).
> Although I have adapted the corresponding fe_ElemType.C
> (i.e. the n_dofs functions),
> I got this error for p > 6:
>
>   ERROR: unsigned char too small to hold Elem::_p_level!
>   Recompile with Elem::_p_level set to something bigger!
>   [0] /home/mubsp/tmp/libmesh/include/geom/elem.h, line 1325, compiled Apr 21 2006 at 12:21:08
>
> For different element types, the order seems to be
> fixed at different levels. Can you perhaps give
> me a quick hint?

Because no element type can support arbitrarily high p (in fact, floating point and matrix conditioning seem to kill us around p = 11), I've put a max_order function in the FEInterface code to tell the DofMap just how high each element can go. If the base polynomial degree + the p_level elevation exceeds this max order, the DofMap bumps the p_level down again. This obviously isn't a complete solution (if p refinement is impossible, 99% of the time you'd prefer h refinement to nothing) but it's a start.

The problem comes in when the base polynomial degree exceeds that max_order even with p_level() == 0 - then the DofMap tries to drop the p_level to -1 (which casts turn into 2^32 - 1 or so) and the p_level() function freaks.

I put an assert in dof_map.C this morning to prevent that misleading error message from cropping up; should I add a more informative error message to dof_map.C to take its place?

Oh, and to summarize all the rambling: in src/fe/fe_interface.C, edit the FEInterface::max_order() function at the bottom of the file to return a higher number for the BERNSTEIN/QUAD* case.

--
Roy
From: Steffen Petersen <steffen.petersen@tu...> - 2006-04-26 20:59:33

Roy,

I just tried to clean up some shape function files and code the Bernstein shapes in a more generic way (including arbitrary orders for quadrilaterals). Although I have adapted the corresponding fe_ElemType.C (i.e. the n_dofs functions), I got this error for p > 6:

  ERROR: unsigned char too small to hold Elem::_p_level!
  Recompile with Elem::_p_level set to something bigger!
  [0] /home/mubsp/tmp/libmesh/include/geom/elem.h, line 1325, compiled Apr 21 2006 at 12:21:08

For different element types, the order seems to be fixed at different levels. Can you perhaps give me a quick hint?

Steffen
From: Roy Stogner <roystgnr@ic...> - 2006-04-21 20:10:39

On Fri, 21 Apr 2006, Roy Stogner wrote:

> So, I turned basis flipping off entirely. Don't try using these
> elements with p>2 on any meshes that don't come from a topologically
> cartesian coarse grid with a self-consistent orientation... at least
> not until someone figures out how to fix it.

False alarm - this new basis flipping bug appears to be one I introduced, and I think I've fixed it.

--
Roy Stogner
From: Roy Stogner <roystgnr@ic...> - 2006-04-21 19:44:54

I've added support for p up to 11 for hierarchic triangles and hexes.

Hexes are less useful than they should be, though, because even after I cheated the "how do you handle basis flipping when derivatives are involved" question by falling back on a finite difference stencil, I discovered that the basis flipping seemed to be broken even for the raw basis functions themselves. So, I turned basis flipping off entirely. Don't try using these elements with p>2 on any meshes that don't come from a topologically cartesian coarse grid with a self-consistent orientation... at least not until someone figures out how to fix it.

Even on such meshes, lower degree elements might be more practical. On my (smooth) test problem the errors were already around 1e-6 with a few quintic elements, the solver iterations jumped by an order of magnitude with a few hexic (sextic? sexy?) elements, and it choked entirely on octics.

--
Roy
From: Roy Stogner <roystgnr@ic...> - 2006-04-17 21:25:57

On Mon, 17 Apr 2006, Benjamin S. Kirk wrote:

> Yes, but (elem->neighbor(n)->level() <= my_level) &&
> (elem->neighbor(n)->active())
> will not be true if the neighbor is refined.

No, but neither will just (elem->neighbor(n)->active()) - the level/my_level comparison is superfluous.

>> Second, what happens when your neighbor is coarser than
>> you are and is flagged for refinement? In that case you're not an
>> unrefined island, but nothing in this test gets triggered.
>>
>> I'm committing the following replacement:
>>
>>   if ((elem->neighbor(n) == NULL) ||
>>       (elem->neighbor(n)->level() < my_level) ||
>>       ((elem->neighbor(n)->active()) &&
>>        (elem->neighbor(n)->refinement_flag() != Elem::REFINE)) ||
>>       (elem->neighbor(n)->refinement_flag() == Elem::COARSEN_INACTIVE))
>
> Go ahead,

Okay, then. This leaves the first clause in for now, and just changes the second. The second clause is actually a bug we should fix; the first is just a debatable design choice that I'm not sure we should change.

> I've seen bizarre enough meshes to make me question the original
> implementation anyway.

Do you have any examples offhand? I could definitely see some runaway badness resulting from this - if your immediate neighbor is coarser than you are, that's the last place you want to accidentally overrefine.

I ran into this while trying to debug my own bizarre (adaptive hp runs but doesn't converge) results. I suppose it's always good to catch a bug, but this turned out to be a completely unrelated problem. :(

--
Roy
From: Benjamin S. Kirk <benjamin.kirk@na...> - 2006-04-17 21:07:19

On Mon, 2006-04-17 at 15:53 -0500, Roy Stogner wrote:

> When checking element neighbors, we don't force refinement on any
> element that has at least one neighbor passing the following test:
>
>   if (elem->neighbor(n) == NULL ||
>       ((elem->neighbor(n)->level() <= my_level) &&
>        (elem->neighbor(n)->active()) &&
>        (elem->neighbor(n)->refinement_flag() != Elem::REFINE)) ||
>       (elem->neighbor(n)->refinement_flag() == Elem::COARSEN_INACTIVE))
>
> I'm a little uncertain about the first clause - it may be good to
> ignore boundary islands (peninsulas?) in 1D, but in 3D if a hex has
> refinements on 5 sides and a boundary on the 6th, I'd say refine it.

OK, but make sure that we don't test NULL neighbors when removing the first clause.

> My real problem is the second clause. First of all, shouldn't
> (elem->neighbor(n)->level() <= my_level) always be true? Even if the
> neighbor cell had great-grandkids, the neighbor itself would still be
> at my_level.

Yes, but (elem->neighbor(n)->level() <= my_level) && (elem->neighbor(n)->active()) will not be true if the neighbor is refined.

> Second, what happens when your neighbor is coarser than
> you are and is flagged for refinement? In that case you're not an
> unrefined island, but nothing in this test gets triggered.
>
> I'm committing the following replacement:
>
>   if ((elem->neighbor(n) == NULL) ||
>       (elem->neighbor(n)->level() < my_level) ||
>       ((elem->neighbor(n)->active()) &&
>        (elem->neighbor(n)->refinement_flag() != Elem::REFINE)) ||
>       (elem->neighbor(n)->refinement_flag() == Elem::COARSEN_INACTIVE))
>
> If I'm misunderstanding something, someone stop me now.
> --
> Roy

Go ahead, I've seen bizarre enough meshes to make me question the original implementation anyway.

Ben
From: Roy Stogner <roystgnr@ic...> - 2006-04-17 20:53:25

When checking element neighbors, we don't force refinement on any element that has at least one neighbor passing the following test:

  if (elem->neighbor(n) == NULL ||
      ((elem->neighbor(n)->level() <= my_level) &&
       (elem->neighbor(n)->active()) &&
       (elem->neighbor(n)->refinement_flag() != Elem::REFINE)) ||
      (elem->neighbor(n)->refinement_flag() == Elem::COARSEN_INACTIVE))

I'm a little uncertain about the first clause - it may be good to ignore boundary islands (peninsulas?) in 1D, but in 3D if a hex has refinements on 5 sides and a boundary on the 6th, I'd say refine it.

My real problem is the second clause. First of all, shouldn't (elem->neighbor(n)->level() <= my_level) always be true? Even if the neighbor cell had great-grandkids, the neighbor itself would still be at my_level. Second, what happens when your neighbor is coarser than you are and is flagged for refinement? In that case you're not an unrefined island, but nothing in this test gets triggered.

I'm committing the following replacement:

  if ((elem->neighbor(n) == NULL) ||
      (elem->neighbor(n)->level() < my_level) ||
      ((elem->neighbor(n)->active()) &&
       (elem->neighbor(n)->refinement_flag() != Elem::REFINE)) ||
      (elem->neighbor(n)->refinement_flag() == Elem::COARSEN_INACTIVE))

If I'm misunderstanding something, someone stop me now.

--
Roy
From: Roy Stogner <roystgnr@ic...> - 2006-04-10 15:35:29

On Mon, 2006-04-10 at 09:45 -0500, Benjamin S. Kirk wrote:

> ..wow, p>10! The condition number must be going to hell in a hurry?

That was my first theory; even getting up that far required a few ILU steps in the preconditioner. There could be some inaccuracy in the shape functions, too, however; I've never thought much about FP error in such simple calculations as basis functions, but when you're dividing "xi to the huge power" over "huge factorial" it may be a problem.

> Seriously, p>10?! Never thought I'd see the day in libMesh... ;)

I'll make you a deal: fix the 3D hierarchics for p=3, and I'll give you p=10 there too! Figuring out how to extend your giant table-o-numbers to arbitrary p wasn't hard, but I don't think I understand the edge and face flipping code well enough to fix it quickly. It's not just the sign of the derivatives that might need to be changed, like I thought before, it's the selection of which 1D shape function gets called for its derivative and which get called for values.

--
Roy Stogner <roystgnr@...>
From: Benjamin S. Kirk <benjamin.kirk@na...> - 2006-04-10 14:56:19

On Mon, 2006-04-03 at 10:41 -0500, Roy Stogner wrote:

> Everything has limitations when used with high polynomial orders; my
> new 2D Hierarchic quads use the analytic derivatives, but they still
> seem to become useless quickly for p>10. I ought to try recompiling
> with long double support again, just to see how much farther they get
> with a few digits extra precision.

..wow, p>10! The condition number must be going to hell in a hurry?

Seriously, p>10?! Never thought I'd see the day in libMesh... ;)

Ben
From: Roy Stogner <roystgnr@ic...> - 2006-04-05 16:52:06

This email subject would now be more accurate as "Initial hp refinement support committed to CVS". The only thing I haven't committed is the p and hp options I added to example 14; there's some 3D code mixed in with my changes that would just be confusing until the 3D hierarchics are working. If anyone really wants the new ex14 soon, let me know and I'll get it cleaned up to add.

On Wednesday 29 March 2006 2:26, I wrote:

> On Wed, 29 Mar 2006, Benjamin S. Kirk wrote:
>> Does
>>> Initial p refinement support - uniform p refinement is working;
>>> adaptive p and hp refinement has yet to be well tested.
>> mean that hp refinement has been tested, but just not rigorously?
>
> Not even unrigorously. I've done some fiddling to run through parts
> of the code, but I'd hoped to do the real tests on a real benchmark
> problem... but I'm not sure what to do about error indicators! I
> don't know how you can choose between h and p refinement with a simple
> to implement code, much less how you can make that choice without
> access to an element assembly function.

Okay, for now I'm ignoring the h-vs-p problem. That will have to wait for me or another libMesh user to become more interested in hp as a research topic, not just hp as something I've figured out how to implement easily. For debugging purposes, I've just been doing both h and p refinement together, so technically hp refinement has been tested, but in practice code with singular solution derivatives will want to stick with h for now.

> If all the code I put in is bug-free (it won't be),

I'm sure it still isn't, but it's giving me solutions on hp meshes now. I'm going to take a break from it here, but if others want to do any testing I'd appreciate it.

> and if our simple error indicators can fairly compare elements of
> different degree (here's hoping),

From what I've been reading, the answer here is that Kelly won't give fair comparisons of different p elements, but the unfairness will probably be a small constant that won't break the adaptive loop.

> and if you have a magic fairy guiding your h vs. p
> refinement choices (you don't),

Ben's right that the patch recovery estimators may be the way for us to go here. Anything that gives a sufficiently better estimate for the solution local to an element will also give a good hp refinement scheme, just by comparing the local error reduction vs. DoF increases for h vs. p refinement.

> and if your code doesn't accidentally assume constant p (our examples
> don't, but more complicated code might),

I'm a little more optimistic here now that I've converted ex14; basically anything that works with HIERARCHIC elements at all is likely to work with p or hp refined hierarchics - with the exception of infinite element code; I don't understand the InfFE classes well enough to feel safe upgrading them to hp.

> and if you don't want p>6 in 1D, p>5 in 2D, p>3 or non-hexes
> in 3D (you will), then libMesh now supports hp refinement.

I've bumped p up to 11 in 1D and for 2D quads for testing purposes; the code could go higher, but I'm not sure it could go higher accurately without higher precision arithmetic.

>> Maybe Derek is interested in scripting some stuff up & learning the
>> capabilities of the library?
>
> I'll forward this to him just in case he's not reading the mailing
> lists yet. Somehow volunteering other people for work never works out
> for me as well as I hope, though. ;)

Yeah, no luck there. I'll forward this to Derek too so maybe he'll feel guilty. ;)

--
Roy Stogner
From: Roy Stogner <roystgnr@ic...> - 2006-04-03 15:42:04

On Sun, 2 Apr 2006, Steffen Petersen wrote:

> This seems to be a bug in the 3D hierarchic shapes (and I just got poor
> results from a convergence test for the HEX27 hierarchics).
> It seems that no one has really used the 3D hierarchics so far.

I was afraid of that. I used the 3D hierarchics once when trying to debug the 3D Hermites, and they didn't work... but I discovered a bug in my user-level code, and after the Hermites started working I just assumed the same bug had been responsible for the weird hierarchics behavior.

Ben expressed worry that those long index lookup tables in fe_hierarchic_shape_3D.C might be wrong. Well, the lookup tables are just fine, but it looks like the real complication of the function is in those long lists of orientation changes. I'd like to implement arbitrary polynomial order for the 3D Hierarchics, and I can do most of the work easily, but I'd been hoping to just copy the orientation-matching code from the current cubic implementation.

> For the 3D Bernstein shapes I had adopted the finite differences
> scheme that was used in libMesh to compute the shape derivatives for
> triangular elements. For the moderate orders that had been implemented
> so far this appears to work fine, but I guess it has limitations
> when used with high polynomial orders.

Everything has limitations when used with high polynomial orders; my new 2D Hierarchic quads use the analytic derivatives, but they still seem to become useless quickly for p>10. I ought to try recompiling with long double support again, just to see how much farther they get with a few digits extra precision.

--
Roy
From: Steffen Petersen <steffen.petersen@tu...> - 2006-04-02 11:55:40

Roy Stogner schrieb:

> On the QUAD elements, we handled flipped edges by taking the negative
> of the odd shape functions on those edges, which works fine.
>
> On the HEX27, it looks like we're handling flipped edges by mapping
> xi to -xi, eta to -eta, or zeta to -zeta, depending on the edge.
> Obviously this mapping gives you the negative of odd shape
> functions... but it doesn't give you the negative of their
> derivatives! Am I missing something, or is this incorrect?

This seems to be a bug in the 3D hierarchic shapes (and I just got poor results from a convergence test for the HEX27 hierarchics). It seems that no one has really used the 3D hierarchics so far.

For the 3D Bernstein shapes I had adopted the finite differences scheme that was used in libMesh to compute the shape derivatives for triangular elements. For the moderate orders that had been implemented so far this appears to work fine, but I guess it has limitations when used with high polynomial orders.

Steffen

> We now (or whenever SourceForge gets their CVS server working again,
> anyway) have hierarchics implemented for arbitrary n on EDGE2/3 and
> QUAD8/9 elements, by the way. They only seem to work well up to n=11
> or so, though; I'm not sure whether it's the floating point function
> evaluations or the matrix conditioning that blows up there, but
> something goes haywire.
> --
> Roy
From: Roy Stogner <roystgnr@ic...> - 2006-04-02 00:08:46

On the QUAD elements, we handled flipped edges by taking the negative of the odd shape functions on those edges, which works fine.

On the HEX27, it looks like we're handling flipped edges by mapping xi to -xi, eta to -eta, or zeta to -zeta, depending on the edge. Obviously this mapping gives you the negative of odd shape functions... but it doesn't give you the negative of their derivatives! Am I missing something, or is this incorrect?

We now (or whenever SourceForge gets their CVS server working again, anyway) have hierarchics implemented for arbitrary n on EDGE2/3 and QUAD8/9 elements, by the way. They only seem to work well up to n=11 or so, though; I'm not sure whether it's the floating point function evaluations or the matrix conditioning that blows up there, but something goes haywire.

--
Roy
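[Editor's note: one way to make the objection precise, assuming the flip is implemented by evaluating the shape routines at the remapped point. Take a 1D edge factor ψ of odd parity.]

```latex
% Under the remapping \xi \mapsto -\xi, the value routine returns
\psi(-\xi) = -\psi(\xi) \qquad (\psi \text{ odd}),
% which is the desired sign flip of the values. But the derivative of
% an odd function is even, so the derivative routine evaluated at the
% remapped point returns
\psi'(-\xi) = +\psi'(\xi),
% whereas the derivative of the remapped shape function carries an
% extra chain-rule sign:
\frac{d}{d\xi}\,\psi(-\xi) = -\psi'(-\xi) = -\psi'(\xi).
% The missing factor of -1 is exactly "the negative of their
% derivatives" that plain point remapping fails to produce.
```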