From: John P. <jwp...@gm...> - 2018-09-19 17:04:02
On Wed, Sep 19, 2018 at 10:30 AM John Peterson <jwp...@gm...> wrote:

> On Wed, Sep 19, 2018 at 9:43 AM Tim Adowski <tra...@bu...> wrote:
>
>> Hello everyone:
>>
>> I am having an issue with the `Elem::contains_point()` function where it returns a false negative when the point tested is near the boundary of an element when using tol=libMesh::TOLERANCE.
>>
>> I modified introduction_ex1 on this branch <https://github.com/tradowsk/libmesh/blob/rayfire-issue/examples/introduction/introduction_ex1/introduction_ex1.C> to show this behavior. I've manually plotted the element nodes and the point to confirm that the point is indeed on the element, though right on an edge.
>>
>> tl;dr
>> how can I best handle this behavior to avoid a false negative? Arbitrarily raising the tolerance until the failure goes away seems more like tuning the code to a specific mesh rather than trying to fix the underlying problem.
>>
>> ======
>> longer version
>>
>> This issue arose in development of the GRINS::RayfireMesh class. In short, the user supplies an origin point on a mesh boundary, and the class will walk in a straight line across the mesh in a given direction. On each element along the line, the class will calculate the points where it enters and leaves that element to create an EDGE2 element that gets stored internally. The problem with that calculation is verifying whether or not a calculated point is actually on the side/face of the given element (SECOND order elements use a newton solver due to the potential nonlinearity on the element sides/faces).
>>
>> The test case above is from a larger mesh where the rayfire traverses the z=0 mesh boundary. I found that using 10.0*libMesh::TOLERANCE for `contains_point()` allows the rayfire to complete successfully for this particular mesh, but I don't know if that tolerance value would work for an arbitrary mesh.
>>
>> Does anyone have a suggestion for how I can best handle these cases? In the test case above, I imagine the failure is due to the difference in z-components, even though they are all essentially zero.
>>
>> Any help is greatly appreciated!
>
> Would it be more correct for you to use Elem::close_to_point() instead of Elem::contains_point() in your application?
>
> I haven't had a chance to try your example yet, but thanks for providing it. I'm curious if you know whether the initial bounding box test is what returns false, or the subsequent FEInterface::inverse_map() and FEInterface::on_reference_element() checks?

I forgot to mention, the initial bounding box test in Elem::point_test() (which is called by Elem::contains_point()) is doing a relative check using hmax() to normalize, so in your case the signed z-coordinate comparisons are

    n(2) - p(2) <= hmax * box_tol
    n(2) - p(2) >= -hmax * box_tol

and the right hand side of both of these is O(1.e-3 * 1.e-6) = 1.e-9. So if your point is outside the box by a slightly larger amount, e.g. 1.e-7, one or the other of these tests will fail...

-- John
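[Editor's note] The relative check John describes can be sketched as a tiny standalone function. This is a simplified illustration, not libMesh's actual Elem::point_test() source; the names `in_box_1d`, `lo`, and `hi` are made up for the sketch. The point is that the slack added to the bounding box scales with the element size, so for hmax ~ 1e-3 and box_tol = 1e-6 the slack is ~1e-9, and a point off by an absolute 1e-7 is rejected:

```cpp
// Simplified stand-in for the relative bounding-box test described above:
// a 1D interval [lo, hi] (e.g. the z-extent of an element's nodes) is
// inflated by hmax * box_tol, where hmax plays the role of Elem::hmax(),
// the element's largest edge length.
bool in_box_1d(double p, double lo, double hi, double hmax, double box_tol)
{
  const double slack = hmax * box_tol;
  return p >= lo - slack && p <= hi + slack;
}
```

With lo = hi = 0 (a planar element lying on the z=0 boundary), hmax = 1e-3, and box_tol = 1e-6, a point with z = -1e-10 passes but z = -1e-7 fails, matching the arithmetic in the message above.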
From: John P. <jwp...@gm...> - 2018-09-19 16:31:15
On Wed, Sep 19, 2018 at 9:43 AM Tim Adowski <tra...@bu...> wrote:

> Hello everyone:
>
> I am having an issue with the `Elem::contains_point()` function where it returns a false negative when the point tested is near the boundary of an element when using tol=libMesh::TOLERANCE.
>
> I modified introduction_ex1 on this branch <https://github.com/tradowsk/libmesh/blob/rayfire-issue/examples/introduction/introduction_ex1/introduction_ex1.C> to show this behavior. I've manually plotted the element nodes and the point to confirm that the point is indeed on the element, though right on an edge.
>
> tl;dr
> how can I best handle this behavior to avoid a false negative? Arbitrarily raising the tolerance until the failure goes away seems more like tuning the code to a specific mesh rather than trying to fix the underlying problem.
>
> ======
> longer version
>
> This issue arose in development of the GRINS::RayfireMesh class. In short, the user supplies an origin point on a mesh boundary, and the class will walk in a straight line across the mesh in a given direction. On each element along the line, the class will calculate the points where it enters and leaves that element to create an EDGE2 element that gets stored internally. The problem with that calculation is verifying whether or not a calculated point is actually on the side/face of the given element (SECOND order elements use a newton solver due to the potential nonlinearity on the element sides/faces).
>
> The test case above is from a larger mesh where the rayfire traverses the z=0 mesh boundary. I found that using 10.0*libMesh::TOLERANCE for `contains_point()` allows the rayfire to complete successfully for this particular mesh, but I don't know if that tolerance value would work for an arbitrary mesh.
>
> Does anyone have a suggestion for how I can best handle these cases? In the test case above, I imagine the failure is due to the difference in z-components, even though they are all essentially zero.
>
> Any help is greatly appreciated!

Would it be more correct for you to use Elem::close_to_point() instead of Elem::contains_point() in your application?

I haven't had a chance to try your example yet, but thanks for providing it. I'm curious if you know whether the initial bounding box test is what returns false, or the subsequent FEInterface::inverse_map() and FEInterface::on_reference_element() checks?

-- John
From: Tim A. <tra...@bu...> - 2018-09-19 15:43:01
Hello everyone:

I am having an issue with the `Elem::contains_point()` function where it returns a false negative when the point tested is near the boundary of an element when using tol=libMesh::TOLERANCE.

I modified introduction_ex1 on this branch <https://github.com/tradowsk/libmesh/blob/rayfire-issue/examples/introduction/introduction_ex1/introduction_ex1.C> to show this behavior. I've manually plotted the element nodes and the point to confirm that the point is indeed on the element, though right on an edge.

tl;dr
how can I best handle this behavior to avoid a false negative? Arbitrarily raising the tolerance until the failure goes away seems more like tuning the code to a specific mesh rather than trying to fix the underlying problem.

======
longer version

This issue arose in development of the GRINS::RayfireMesh class. In short, the user supplies an origin point on a mesh boundary, and the class will walk in a straight line across the mesh in a given direction. On each element along the line, the class will calculate the points where it enters and leaves that element to create an EDGE2 element that gets stored internally. The problem with that calculation is verifying whether or not a calculated point is actually on the side/face of the given element (SECOND order elements use a newton solver due to the potential nonlinearity on the element sides/faces).

The test case above is from a larger mesh where the rayfire traverses the z=0 mesh boundary. I found that using 10.0*libMesh::TOLERANCE for `contains_point()` allows the rayfire to complete successfully for this particular mesh, but I don't know if that tolerance value would work for an arbitrary mesh.

Does anyone have a suggestion for how I can best handle these cases? In the test case above, I imagine the failure is due to the difference in z-components, even though they are all essentially zero.

Any help is greatly appreciated!

~Tim
tradowsk
From: Roy S. <roy...@ic...> - 2018-08-31 17:57:00
On Fri, 31 Aug 2018, Derek Gaston wrote:

> Instead of being like: "yeah - you're right, I can see that while I like some of the things we got with the new system it does kinda blow for some workflows... what are your ideas?"

Actual quotes from me:

>> I still dislike the Automake build system in libMesh.
>
> So do I.

> I'd really love to be able to do out-of-source builds in MOOSE too. If "autoderek" handled that (plus "make install"; I don't care as much about dist/distcheck) it would be totally worth the switch from my point of view.

> There doesn't yet seem to be a build system that doesn't suck, so it's not *entirely* ridiculous to envision reinventing the wheel here.

> I remember pre-automake libMesh too, and it was certainly better in some ways, just much worse in some others. I still think the move to automake was an improvement but we're definitely not at an optimum.

And from Paul:

> If you do have the time and your efforts achieve my goals, Roy's goals, Ben's goals, your goals, and it's more streamlined for everyone, than that would be fucking awesome.

> Of course it's not optimal. There is no optimal build system. Both Roy and I's first response to this whole thread was all the build systems suck. This seems to suck the least of what I've seen. If there's something that sucks less that still has all the features we want, we should try and use that.

> I will be highly reluctant to move away from something that is working and has been working for a long time until I'm satisfied that features I use and count on are functional. But if you are inclined to do it, by all means.

versus imaginary quotes:

> you guys are like "Why do you not bow before the almighty Automake! It is pure GOD and shall not be questioned!".

The imaginary people you're arguing with sound like jerks. If that's the case,

> Anyway: I'm honestly done with this conversation.

is probably for the best.

I'm not sure how I could have communicated better here. And that's just about the basic question of "what does Roy think of automake"!

The failures to communicate about more important questions (e.g. there elapsed literally 35 minutes between me gushing about out-of-source builds as a killer feature and you putting out-of-source builds lowest on the priority list) are more serious, and that was roughly the point where I started to fear that this was going to be more of a flame war I should bow out of than a design discussion I should dive into.

Even Paul's position (would adopt this if his feature list was kept but doesn't have time or funding to spare on it) is more negative than mine (We could possibly talk Rich into making it official and regardless I'm a sucker with my time); if there is a design discussion in the future I would still like to help. But this thread has not been evolving into that one; there's been actual progress we'll want to unearth from the acrimony later, but "later" should include some cooling-off time.
---
Roy
From: Kirk, B. (JSC-EG311) <ben...@na...> - 2018-08-31 17:34:11
For sure, I couldn't care less about BSD make. Just trying to point out the rationale for why this is, and a strategy to work around. Peace.
From: Derek G. <fri...@gm...> - 2018-08-31 16:56:38
Ben: at this point I think it's pretty crazy not to just target GNU Make. You can see from Jed's comments that that is exactly what PETSc is doing as well (which effectively means we are too). We can pick a reasonable minimum version and just require it. The world moves on. We moved to C++11 (and hopefully 14 soon). We've moved from MPI-1 to MPI-2 to MPI-3. There is no reason why we can't just test for GNU Make in configure and say that we require it at this point.

Also: remember that most of our peers are requiring specific versions of CMake for building - and they are doing ok (it's a pain - but it's not the end of the world). Requiring GNU Make isn't that big of a deal.

Seriously guys: I still don't understand all the negativity here. If you guys are THAT unwilling to change then we have bigger problems. This is not new - these have been complaints from day-1 that were ignored when you switched out something that in our (especially my) eyes "just worked". You ignored my complaints then (and through the years) to get a few features that you use - that to this day I have literally never used (and aren't necessary for one of the largest user-bases that use libMesh: MOOSE users) - yet you have no capability for empathy my direction when I finally say enough is enough and we need to find a new solution that works better for everyone (and I have even volunteered to develop it).

Instead of being like: "yeah - you're right, I can see that while I like some of the things we got with the new system it does kinda blow for some workflows... what are your ideas?" you guys are like "Why do you not bow before the almighty Automake! It is pure GOD and shall not be questioned!". It's sick - and annoying. We're software developers: we highlight issues and make things better. We should be supportive when people want to improve things.

Anyway: I'm honestly done with this conversation. I have real work to do - no reason to argue with negative brick walls. The next time you hear from me on this topic it will be because I have a Make based system that doesn't suck in a fork. If you don't want to merge it that's fine - we'll deal with it then. I wish I could have developed this _with_ you guys... but you lost out on that.

Derek

On Fri, Aug 31, 2018 at 11:05 AM Kirk, Benjamin (JSC-EG311) <ben...@na...> wrote:

> On Aug 31, 2018, at 9:42 AM, Paul T. Bauman <ptb...@gm...> wrote:
>
>> Why is everyone so afraid of actually writing Make? I don't get it.
>
> No one in this thread is afraid of writing make.
>
> wildcards in GNU make as required by that blob are explicitly not allowed within automake because of the goal of supporting all makes. So that is part of the issue, it is possible to write make but the targets must be specified.
>
> A work around though could be a local hook, that rather than using wildcards within make instead calls a shell script that does effectively the same thing,
From: Jed B. <je...@je...> - 2018-08-31 16:37:10
Derek Gaston <fri...@gm...> writes:

>> It's necessary to catch errors if a symbol is removed from the library. Automake/make has no way to know if changes have that effect.
>
> The binaries that Roy is talking about aren't "test" binaries though. They're just utilities that libMesh has grown over the years. I would love to be able to turn them completely off. They do take a decent amount of time to link - especially on slower networked filesystems.

Oh, this is about the stuff in src/apps? Yeah, why not link those all into one executable that dispatches based on its name? It's a common pattern; that's what Busybox does (for an entire Linux userland), Clang (clang++, clang-cl, clang-cpp are symlinks to clang), TeX Live (lots of executables are symlinks to pdftex), xz (lzcat, lzma, unlzma, unxz, xzcat are all symlinks to xz), etc. Then you'd just have one link without needing a special configuration or target.
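[Editor's note] The multi-call-binary pattern Jed suggests — one executable that behaves differently depending on the name it was invoked as — can be sketched in a few lines. This is a hypothetical illustration, not libMesh's actual src/apps code; the tool names `meshtool` and `meshdiff` and their handlers are stand-ins:

```cpp
#include <cstddef>
#include <iostream>
#include <map>
#include <string>

// Hypothetical entry points standing in for the real src/apps utilities.
int run_meshtool(int /*argc*/, char ** /*argv*/) { return 0; }
int run_meshdiff(int /*argc*/, char ** /*argv*/) { return 0; }

// Multi-call dispatch: strip any directory prefix from argv[0] and look the
// basename up in a table. Installing "meshtool" and "meshdiff" as symlinks
// to one binary gives each utility its own command name with only a single
// link step at build time (the Busybox/Clang/xz pattern described above).
int dispatch(int argc, char **argv)
{
  std::string name(argv[0]);
  const std::size_t slash = name.find_last_of('/');
  if (slash != std::string::npos)
    name.erase(0, slash + 1);

  static const std::map<std::string, int (*)(int, char **)> tools =
    {{"meshtool", run_meshtool}, {"meshdiff", run_meshdiff}};

  const auto it = tools.find(name);
  if (it == tools.end())
    {
      std::cerr << "unknown tool: " << name << '\n';
      return 1; // unrecognized invocation name
    }
  return it->second(argc, argv);
}
```

A main() would simply `return dispatch(argc, argv);`; the build then produces one executable plus a handful of symlinks instead of linking each utility separately.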
From: Kirk, B. (JSC-EG311) <ben...@na...> - 2018-08-31 15:05:29
On Aug 31, 2018, at 9:42 AM, Paul T. Bauman <ptb...@gm...> wrote:

> Why is everyone so afraid of actually writing Make? I don't get it.

No one in this thread is afraid of writing make.

wildcards in GNU make as required by that blob are explicitly not allowed within automake because of the goal of supporting all makes. So that is part of the issue, it is possible to write make but the targets must be specified.

A work around though could be a local hook, that rather than using wildcards within make instead calls a shell script that does effectively the same thing,
From: Paul T. B. <ptb...@gm...> - 2018-08-31 14:42:40
On Fri, Aug 31, 2018 at 7:23 AM Derek Gaston <fri...@gm...> wrote:

> On Thu, Aug 30, 2018 at 6:28 PM Paul T. Bauman <ptb...@gm...> wrote:
>
>> It just occurred to me that in fact there are autoconf macros that should do exactly this so configure should be able to generate the symlinks at configure time and remove the need to manually run a shell script for symlinking. I will look into this because this is super annoying. It will not remove the need to run the script to add the file names to the build system though.
>
> Building these with configure is (arguably) worse. I don't want to have to run configure when I add a new header!

We're already running configure; I was just suggesting it could be easy to move away from the run script using already existing bits in autotools.

> No - these should be in the _Makefile_. You can write a generic rule that will generate them.

For just doing the symlinks, this could possibly be done. One would need to look at how the dependencies are set up and who's using the symlinks in their build vs. the actual source files. We cannot get around adding the file names to the Makefile.am files. That should not trigger a reconfigure, just a run of automake before the build proceeds; if a reconfigure is triggered, we should try and fix that.

> Why is everyone so afraid of actually writing Make? I don't get it.

No one in this thread is afraid of writing make. But we do not have time to rewrite a new build system from scratch that contains all the features we want. And that's effectively what you're proposing. If you do have the time and your efforts achieve my goals, Roy's goals, Ben's goals, your goals, and it's more streamlined for everyone, than that would be fucking awesome. I think it will be a huge time investment and what Roy and I are pointing out is that we're not willing to compromise on out-of-source builds, and install and check make targets in our builds. (I still think dist and distcheck targets are necessary, but it sounds like I'll get outvoted). So we would hate to see you sink several weeks into something that you're happy with and doesn't have these capabilities and it ultimately doesn't get adopted.

And there need not be an all-or-nothing to this. If there are things we can try and fix now with the current system, we should do that. I will try and find time re: reconfiguring-that-shouldn't-be-needed. If it's easy to write make targets to build only one method, we could probably do that (Makefile.am files are ultimately just Makefiles, so we can add our own make rules); I'm not sure how easy it would be to modify the check and install targets to operate on a single METHOD, but if you don't need those, it might be possible.

> Here is the bit of Makefile I created for doing something similar for MOOSE:
>
> https://github.com/idaholab/moose/blob/devel/framework/moose.mk#L60-L93
>
> It works for every header file - in every directory - and symlinks them all into another directory.
>
> It NEVER has to change: no matter if you add header files, add directories, add sub-directories, rename directories... ANYTHING. It is absolutely foolproof because it puts the symlink generation into the same dependency resolution stream as everything else in the Makefile.
>
> This is what I'm talking about - create a good, generic Make-based build system and it should never have to change (unless you actually want to change the behavior of the build system itself).

This would be feasible for the build process, but almost certainly impossible for adding examples, make install, make check, etc. So part of whatever build system would need to be touched for these.
From: Derek G. <fri...@gm...> - 2018-08-31 11:23:25
On Thu, Aug 30, 2018 at 6:28 PM Paul T. Bauman <ptb...@gm...> wrote:

> It just occurred to me that in fact there are autoconf macros that should do exactly this so configure should be able to generate the symlinks at configure time and remove the need to manually run a shell script for symlinking. I will look into this because this is super annoying. It will not remove the need to run the script to add the file names to the build system though.

Building these with configure is (arguably) worse. I don't want to have to run configure when I add a new header!

No - these should be in the _Makefile_. You can write a generic rule that will generate them.

Why is everyone so afraid of actually writing Make? I don't get it.

Here is the bit of Makefile I created for doing something similar for MOOSE:

https://github.com/idaholab/moose/blob/devel/framework/moose.mk#L60-L93

It works for every header file - in every directory - and symlinks them all into another directory.

It NEVER has to change: no matter if you add header files, add directories, add sub-directories, rename directories... ANYTHING. It is absolutely foolproof because it puts the symlink generation into the same dependency resolution stream as everything else in the Makefile.

This is what I'm talking about - create a good, generic Make-based build system and it should never have to change (unless you actually want to change the behavior of the build system itself).

Derek
From: Derek G. <fri...@gm...> - 2018-08-31 11:15:07
On Thu, Aug 30, 2018 at 5:12 PM Jed Brown <je...@je...> wrote:

> Roy Stogner <roy...@ic...> writes:
>
> It also parallelizes better because make has a flat and complete dependency graph. Non-recursive make is much better.

Definitely! In MOOSE we actually create the entire list of files to be compiled across multiple applications and up and down the application / library hierarchy so that we get max parallelism. libMesh's build is very "stuttery" by comparison (bursts of files are compiled simultaneously) and definitely leads to build slowdown.

> It's necessary to catch errors if a symbol is removed from the library. Automake/make has no way to know if changes have that effect.

The binaries that Roy is talking about aren't "test" binaries though. They're just utilities that libMesh has grown over the years. I would love to be able to turn them completely off. They do take a decent amount of time to link - especially on slower networked filesystems.

> It is possible to link a bunch of examples together into one executable that behaves differently depending on its name (for example).

I don't think that's critical for this case. We just don't need to build these binaries most of the time.

Derek
From: Derek G. <fri...@gm...> - 2018-08-31 11:08:31
As you can tell - I got pulled out to dinner last night with collaborators and never made it back to this discussion... Hopping back in now...

On Thu, Aug 30, 2018 at 6:00 PM Paul T. Bauman <ptb...@gm...> wrote:

> On Thu, Aug 30, 2018 at 5:06 PM Derek Gaston <fri...@gm...> wrote:
>
>> On Thu, Aug 30, 2018 at 4:36 PM Paul T. Bauman <ptb...@gm...> wrote:
>>
>>> You guys get the blame for this one. There was insistence from MOOSE developers that the bootstrapped build system be included in the master tree. I was against it.
>>
>> No way. It wasn't in the old build system - and is in the new. YOU were the one who was trying to take away a feature (multiple simultaneous builds with different METHODS) that *everyone* was (and had been for a long time and still is) actually using to get features *you* wanted with Automake.
>
> If we didn't include the bootstrapped build system in the master tree, there would be no diffs when the build system was updated, except for Makefile.am or configure.ac, whichever piece was updated. MOOSE developers insisted those be included in the new build system. Thus we have separate bootstrap commits.

Oh damn - we got our argument all twisted up here! Looking back over the history - you were replying to a comment I made about how METHODS don't work very well in the automake system... but you were making a point about Makefile diffs. I missed the point you were making and replied with more force about METHODS :-)

In summary from my argument: METHODS don't work well currently. It would be great to solve that with the new system.

Switching over to your statements about bootstrapping: NO WAY! There is no way you can convince me that having an un-bootstrapped library is the way to go. We have literally thousands of people using MOOSE and they need to be able to build libMesh simply, reliably and quickly from the Git repo. If that wasn't the case we would have to maintain our own fork of libMesh that was bootstrapped or something. It would be a complete mess. I don't think any of the MOOSE developers or users ever (basically never) run bootstrap (excepting John). Committing the bootstrapped files, while annoying because they change, was definitely the right thing to do.

Derek
From: Paul T. B. <ptb...@gm...> - 2018-08-30 22:28:36
On Thu, Aug 30, 2018 at 1:32 PM Derek Gaston <fri...@gm...> wrote:

> On Thu, Aug 30, 2018 at 11:54 AM Roy Stogner <roy...@ic...> wrote:
>
>> On Thu, 30 Aug 2018, Derek Gaston wrote:
>>
>> Should we put one magic add_files.sh at the top level?
>
> I guess? I still don't understand why these are even necessary - there should just be a build rule for the symlinks!

It just occurred to me that in fact there are autoconf macros that should do exactly this, so configure should be able to generate the symlinks at configure time and remove the need to manually run a shell script for symlinking. I will look into this because this is super annoying. It will not remove the need to run the script to add the file names to the build system though.
From: Paul T. B. <ptb...@gm...> - 2018-08-30 22:19:45
|
On Thu, Aug 30, 2018 at 4:58 PM Derek Gaston <fri...@gm...> wrote: > > > On Thu, Aug 30, 2018 at 4:35 PM Paul T. Bauman <ptb...@gm...> wrote: > >> On Thu, Aug 30, 2018 at 3:20 PM Derek Gaston <fri...@gm...> wrote: >> >>> On Thu, Aug 30, 2018 at 3:02 PM Paul T. Bauman <ptb...@gm...> >>> wrote: >>> >>>> On Thu, Aug 30, 2018 at 1:32 PM Derek Gaston <fri...@gm...> >>>> wrote: >>>> >>>>> On Thu, Aug 30, 2018 at 11:54 AM Roy Stogner <roy...@ic... >>>>> > >>>>> >>>> >> That shouldn't happen. I'm betting that you made another change along >> side or it's an artifact of the symlinking bits (which needs to be fixed). >> The only time configure ever needs to be rerun is if you change your >> configuration you or you change how you configure your build ( >> configure.ac). When I add new files and hit `make` in my build tree, >> automake reruns and then proceeds with the build, regardless of .h or .C >> file additions. Even changing Makefile.am does not need a reconfigure. >> > > Really? Adding one header file and rerunning the scripts caused a full > reconfigure for me earlier. > I will try and find time this weekend to test this. If it causing a reconfigure, that should be fixed. > > >> >> >>> Any tiny change to mesh / equation_systems / etc. creates a whole >>> rebuild. I'd love to see statistics on how much this saves you. >>> >> >> Sure, because there is a lot of dependency in those files, but I claim >> most of the libMesh changes happen much farther downstream of that. In >> fact, rarely do I have to recompile more than a handful of libMesh files. >> > > A lot of my work these days is in parallel_implementation.h and > communicator.h. > Yeah, that will require rebuilding the whole tree. (BTW, thank you for your efforts here. I'm hoping that will help out scaling quite a bit.) > > Your denial of how much of libMesh you have to (not!) 
rebuild is just as > anecdotal as my denial of needing out-of-tree builds :-) > I just was pointing out that in fact there are many cases were I can rebuild small parts of the tree with changes. I did not assert that one never needs to rebuild the whole tree, just that it's less common. > > >> >> >>> In addition: MOOSE has never had it and I've only ever heard one person >>> complain (Roy). >>> >>> Most of the MOOSE developers (besides myself and John) simply run the >>> MOOSE script that we have to rebuild libMesh because they can't even be >>> bothered to deal with it manually. Before Automake that script never even >>> existed! >>> >> >> Again, in-source builds just worked for me literally just now. >> `configure; make` or just `make`. I'm not sure what else you're doing your >> script >> > > Building works - but did you try to use it? > > You can see my "install" script I have to manually run after each build > here: > https://github.com/friedmud/libmesh_scripts/blob/master/libmesh_install > > That is the part that I have to keep up to date (it breaks once or twice a > year - so not a huge deal). > > Huh? There are plenty of times already where devel passed and dbg didn't. >> Running these configurations in tandem always is critical. >> > > Sure - but _I_ don't have to run them. I let CI run them. > > >> How? It literally just did it and it worked fine. What is in your script >> besides `make install`? >> > > See above > > >> It shows adding a file should be streamlined. It's hyperbole to >> extrapolate that to saying the build system is broken when literally every >> feature of the current build system is exercised and is passing on CI. >> > > It's broken because it doesn't do the very most basic thing - allow you to > build the software as you change it. It some things you like well > (obviously) - but I don't understand how you can't see that it's > non-optimal for a really normal use-case of just adding a file / directory > / example. 
> Of course it's not optimal. There is no optimal build system. Both Roy and I's first response to this whole thread was all the build systems suck. This seems to suck the least of what I've seen. If there's something that sucks less that still has all the features we want, we should try and use that. > > >> -1. Rebuilding/linking of all the small executables every time >>> >> >> I agree, but that's easy to enable configure option for (which you'd need >> to do in any other build system anyway). >> > > I would rather they were just off by default and you could do a "make > apps" or something to build them. Or - you could go into each one and > build it individually by typing "make" if you want that one. > I would be fine with this. We can do this with the current build system. > > >> -2. In editor building broken (in multiple ways) >>> >> >> This may be valid. I don't do in editor building. >> > > That makes sense! :-) > Jed seems to think this works. > > >> -3. Pain to add any single new file (especially headers) >>> >> >> This was number 1 under negatives, no double-counting allowed. >> > > This one deserves to be counted twice :-P > > >> -4. Thousands of Makefiles >>> >> >> (A handful) That you never need to look at, touch, or otherwise modify? >> This is not a valid criticism. >> > > It is - because diffs get generated in them a lot - and they clutter up > the world. Only one Makefile should be necessary. > Even the most complex SCons builds (which is probably the best at handling doing really complex shit with builds IMHO) has multiple SConscripts in their trees. -4b. Makefiles in include directories cause in-editor building to get hung >>> up (I have to switch to a .C file and build from there) >>> >> >> I believe this is only because of the symlinking which I agree should go >> away. >> > > Symlinking can't go away. 
We want it there for the include path > "namespacing" > If we move all the headers into a libmesh folder in each include directory in the source tree, the symlinking could go away. I don't understand why we don't do that. > > >> -5. Creating new examples is a HUGE pain >>> >> >> It is annoying to get it to `make install` properly (and it's not hugely >> annoying; it is mostly automated). Creating a new example is easy. >> > > Easy? You have to manually modify Makefile.am files (last time I did it > anyway - is there a script for that now?). > > >> >> >>> -6. Trying to use an example to try out a new feature (or editing an >>> existing example) also sucks. You have to use pretty funky command-lines >>> just to get a single executable re-run. >>> >> >> Huh? I'm not sure how it gets any easier than literally running the >> script that is installed alongside the example. >> > > Why should I have to _install_ it? I just want to build it and run it > real quick. I'm _developing_... that has nothing to do with _installing_. > I want to build it in place (which should produce a nice executable that's > sitting *right there*) and then just run it! > For examples, I think that install is the correct focus. As a developer, I'm making an example for users to use, not to debug code I'm writing. I use tests for that. > Further, I don't want to run your script - I just want to run the > executable! What if I want to pass a different value for a command-line > parameter? Do I need to edit the script? Why can't I just run the executable? > Most (all?) examples require an input file (GetPot). The run script lowers the barrier for users to run the example (I think) because they can run the script right away instead of reading what needs to be fed to the example executable. Of course you can run the executable on your own. > I guess that gets to the crux of my issue. 
_installing_ should only be > done by end users that want to put the library somewhere in their > filesystem to be linked against by multiple applications / users. When I'm > developing I want everything to just stay where I'm developing! > > I seriously don't get it. Do you actually WANT to work this way - or are > you just defending it to defend it? If you could get out-of-tree builds, and > "make install"... without any of the hassle we have currently, would you > actually not want that because you really honestly like the current system? > Sure, that would be fantastic. Also make check, make dist, make distcheck. > I'm just trying to understand your motivation here. I'm honestly wanting > to make things better - you just keep throwing up roadblocks about why > you're fine with the status quo. > This whole thread started with comments from you wanting to strip out the build system and features that we use every day in our development and that we consider to be essential. I'm pushing back to keep the features that I consider vital. Of course I would love to just drop a new file in and not edit a fucking Makefile.am file. Or have it be easier to add an example. I'm not aware of an existing build system that achieves all of those. If you are willing to write one, then hell yeah. But it sure sounded like you would rather just go with the old way and take away the features we added, which I absolutely do not want removed and, yeah, I'm going to push back on that. I think it would be a *significant* time investment to create such a system. > > Really: the current build system is _painful_. In the same way that you > think out-of-tree-builds are "vital": I'm telling you that the current > build system is so painful that I go WAY out of my way to avoid interacting > with it. I do just about everything I can in a downstream app just to > avoid having to deal with libMesh's build system. 
> > Can you just stop being defensive for a moment and think about the > possibility that the current system isn't awesome so we can have some > constructive dialogue about what we could do differently? > I never asserted ours was awesome. Merely that it sucks the least. I'd be happy to have a different build system that was more streamlined and still had all the features we've been discussing. > Derek > >> |
From: Paul T. B. <ptb...@gm...> - 2018-08-30 22:00:52
|
On Thu, Aug 30, 2018 at 5:07 PM Roy Stogner <roy...@ic...> wrote: > > On Thu, 30 Aug 2018, Paul T. Bauman wrote: > > > I would love to see how it could be sped up. The vast majority of > > time in make install is spent in linking the library and there's no > > getting around that. > > Looping over directory after directory costs some time, and I've heard > a non-recursive automake setup would fix that. > It could, though I don't think it will make much of a difference; maybe I'm underestimating. This is milliseconds compared to several seconds of linking (in the eyeball norm). Of course if we can do it, we should. For linking the library, IIRC most modern linkers operate in serial > unless you explicitly try to tell them to parallelize? That's > annoying and we could look into that if it bothers people. > That would be cool, I didn't know that was a thing. > I think Derek specifically referred to the re-linking of apps whenever > the library changes, though. That's definitely more paranoid than it > needs to be. Presumably to be compatible with static linking and > weird-HPUX-linkers and whatnot, automake forces a re-link whenever the > library changes, not just whenever the ABI changes? Correct. I'm curious to try whether it still forces a relink if we only have shared linking and disable static linking. > I prefer > too-paranoid over not-paranoid-enough, but it might be possible to > tweak the behavior to fall in-between. > At the very least, it should be easy to turn off building the apps at configure time. > --- > Roy > |
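[Editor's note] Turning the apps off at configure time, as Paul suggests above, is straightforward in autotools. A hedged sketch — the `--disable-apps` option and the `BUILD_APPS` conditional are hypothetical names, not existing libMesh switches:

```makefile
# configure.ac (sketch)
AC_ARG_ENABLE([apps],
  [AS_HELP_STRING([--disable-apps], [do not build the small app executables])],
  [], [enable_apps=yes])
AM_CONDITIONAL([BUILD_APPS], [test "x$enable_apps" = "xyes"])

# src/apps/Makefile.am (sketch)
if BUILD_APPS
bin_PROGRAMS = meshtool
endif
```

With this, `./configure --disable-apps` skips the app builds (and their re-links) entirely, while running plain `make` inside src/apps still builds them on demand when enabled.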
From: Paul T. B. <ptb...@gm...> - 2018-08-30 22:00:36
|
On Thu, Aug 30, 2018 at 5:06 PM Derek Gaston <fri...@gm...> wrote: > On Thu, Aug 30, 2018 at 4:36 PM Paul T. Bauman <ptb...@gm...> wrote: > >> You guys get the blame for this one. There was insistence from MOOSE >> developers that the bootstrapped build system be included in the master >> tree. I was against it. >> > > No way. It wasn't in the old build system - and is in the new. YOU were > the one who was trying to take away a feature (multiple simultaneous builds > with different METHODS) that *everyone* was (and had been for a long time > and still is) actually using to get features *you* wanted with Automake. > If we didn't include the bootstrapped build system in the master tree, there would be no diffs when the build system was updated, except for Makefile.am or configure.ac, whichever piece was updated. MOOSE developers insisted those be included in the new build system. Thus we have separate bootstrap commits. > All we did was complain (in the same way you are now when I suggested we > remove build system features) that something we use everyday was going away. > Including the bootstrapped build system or not in the master tree has no effect on the METHODS feature. If we did not include the bootstrapped build system in the master tree there would be no diffs and no need for bootstrap commits. I would be in favor of removing the bootstrapped parts of the build system from the master tree. > > I will agree that what we ended up with was a compromise that isn't the > best. I'm not saying we try to fix it within the current build system... > but it is something I would like to fix in a new one. > > Derek > |
From: Jed B. <je...@je...> - 2018-08-30 21:39:14
|
Roy Stogner <roy...@ic...> writes: > ... Is this something that normal people do in practice? Or is it > just the sort of thing an Obfuscated C Code Contest winner does when > language-only madness gets too boring and no longer calms the shakes? It's mainly used for testing, especially if you need to use static libraries (where each test executable is kinda big). I prototyped it for PETSc a while back and will finish at some point. (PETSc currently needs to delete executables as it goes when running tests because the sum of all test executables would blow quotas on many systems. This limits parallelism. A single executable would rectify that.) The feature (in a clumsy manual sense) has been in CMake/CTest for a long time. https://baoilleach.blogspot.com/2013/06/using-ctest-with-multiple-tests-in.html |
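[Editor's note] The single-executable trick Jed describes is the busybox-style multi-call pattern: one binary is installed (or symlinked) under several names and dispatches on the name it was invoked as. A minimal sketch in C — this is not PETSc's or CTest's actual mechanism, and the applet names (`test_a`, `test_b`) are made up:

```c
#include <stdio.h>
#include <string.h>

/* Strip any directory prefix from argv[0]. */
const char *applet_name(const char *argv0)
{
    const char *slash = strrchr(argv0, '/');
    return slash ? slash + 1 : argv0;
}

int run_test_a(void) { puts("running test a"); return 0; }
int run_test_b(void) { puts("running test b"); return 0; }

/* Dispatch on the invoked name; a larger multi-call binary would use a
   table of {name, function} pairs rather than an if-chain. */
int dispatch(const char *argv0)
{
    const char *name = applet_name(argv0);
    if (strcmp(name, "test_a") == 0) return run_test_a();
    if (strcmp(name, "test_b") == 0) return run_test_b();
    fprintf(stderr, "unknown applet: %s\n", name);
    return 1;
}

/* main() of the combined executable is then just:
   int main(int argc, char **argv) { (void)argc; return dispatch(argv[0]); }
   with e.g. `ln -s combined test_a` for each test in the build tree. */
```

Since each test name is just a link to the one binary, the whole suite costs a single link step, which is the quota/parallelism win described above.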
From: Roy S. <roy...@ic...> - 2018-08-30 21:24:39
|
On Thu, 30 Aug 2018, Jed Brown wrote: > Roy Stogner <roy...@ic...> writes: > >> I think Derek specifically referred to the re-linking of apps whenever >> the library changes, though. That's definitely more paranoid than it >> needs to be. > > It's necessary to catch errors if a symbol is removed from the library. > Automake/make has no way to know if changes have that effect. Gah, good point. It still might be nice to give users some "make install_library_only" option for regular use. > It is possible to link a bunch of examples together into one executable > that behaves differently depending on its name (for example). ... Is this something that normal people do in practice? Or is it just the sort of thing an Obfuscated C Code Contest winner does when language-only madness gets too boring and no longer calms the shakes? --- Roy |
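[Editor's note] Roy's "make install_library_only" idea could be approximated without touching automake's re-link paranoia, assuming the usual recursive layout where the library target lives under `src/` (the target name and path here are hypothetical):

```makefile
# Top-level wrapper rule (sketch): install just the library,
# skipping the app/example re-link and install steps.
install_library_only:
	$(MAKE) -C src install
.PHONY: install_library_only
```

This keeps the default `make install` fully paranoid while giving developers a faster everyday path.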
From: Jed B. <je...@je...> - 2018-08-30 21:12:33
|
Roy Stogner <roy...@ic...> writes: > On Thu, 30 Aug 2018, Paul T. Bauman wrote: > >> I would love to see how it could be sped up. The vast majority of >> time in make install is spent in linking the library and there's no >> getting around that. > > Looping over directory after directory costs some time, and I've heard > a non-recursive automake setup would fix that. It also parallelizes better because make has a flat and complete dependency graph. Non-recursive make is much better. > For linking the library, IIRC most modern linkers operate in serial > unless you explicitly try to tell them to parallelize? That's > annoying and we could look into that if it bothers people. > > I think Derek specifically referred to the re-linking of apps whenever > the library changes, though. That's definitely more paranoid than it > needs to be. It's necessary to catch errors if a symbol is removed from the library. Automake/make has no way to know if changes have that effect. It is possible to link a bunch of examples together into one executable that behaves differently depending on its name (for example). |
From: Roy S. <roy...@ic...> - 2018-08-30 21:07:09
|
On Thu, 30 Aug 2018, Paul T. Bauman wrote: > I would love to see how it could be sped up. The vast majority of > time in make install is spent in linking the library and there's no > getting around that. Looping over directory after directory costs some time, and I've heard a non-recursive automake setup would fix that. For linking the library, IIRC most modern linkers operate in serial unless you explicitly try to tell them to parallelize? That's annoying, and we could look into that if it bothers people. I think Derek specifically referred to the re-linking of apps whenever the library changes, though. That's definitely more paranoid than it needs to be. Presumably to be compatible with static linking and weird-HPUX-linkers and whatnot, automake forces a re-link whenever the library changes, not just whenever the ABI changes? I prefer too-paranoid over not-paranoid-enough, but it might be possible to tweak the behavior to fall in-between. --- Roy |
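[Editor's note] The non-recursive automake setup mentioned in this exchange means one top-level Makefile.am that `include`s per-directory fragments, so make sees a single flat dependency graph instead of recursing directory by directory. A sketch with made-up fragment paths:

```makefile
# Makefile.am (top level, sketch); subdir-objects keeps object files
# next to their sources instead of piling them in one directory.
AUTOMAKE_OPTIONS = subdir-objects
lib_LTLIBRARIES = libmesh.la
libmesh_la_SOURCES =
include src/mesh/Makefile.inc
include src/base/Makefile.inc

# src/mesh/Makefile.inc (sketch): each fragment appends to the single
# library target rather than defining its own.
libmesh_la_SOURCES += src/mesh/mesh.C src/mesh/unstructured_mesh.C
```

Besides skipping the per-directory traversal, `make -jN` can then parallelize across the whole tree, since no dependencies are hidden behind a recursive sub-make.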
From: Derek G. <fri...@gm...> - 2018-08-30 21:06:53
|
On Thu, Aug 30, 2018 at 4:36 PM Paul T. Bauman <ptb...@gm...> wrote: > You guys get the blame for this one. There was insistence from MOOSE > developers that the bootstrapped build system be included in the master > tree. I was against it. > No way. It wasn't in the old build system - and is in the new. YOU were the one who was trying to take away a feature (multiple simultaneous builds with different METHODS) that *everyone* was (and had been for a long time and still is) actually using to get features *you* wanted with Automake. All we did was complain (in the same way you are now when I suggested we remove build system features) that something we use everyday was going away. I will agree that what we ended up with was a compromise that isn't the best. I'm not saying we try to fix it within the current build system... but it is something I would like to fix in a new one. Derek |
From: Jed B. <je...@je...> - 2018-08-30 21:05:25
|
"Paul T. Bauman" <ptb...@gm...> writes: >> No need to modify Makefiles. No need to run scripts - all done within my >> editor with source linking to compile errors. Not currently possible. >> > > Again, the building within the editor may be an issue. I'm willing to bet > it's fixable given the number of developers that use emacs together with > the literally thousands of Linux packages that use autotools as their build > system. I don't understand the issue Derek has (I often build in Emacs). Perhaps it's necessary to pass absolute paths to the compiler so that you don't need to tell the editor where to base relative paths searches? |
From: Paul T. B. <ptb...@gm...> - 2018-08-30 21:00:00
|
On Thu, Aug 30, 2018 at 3:50 PM Derek Gaston <fri...@gm...> wrote: > What I mean is: what if you guys could have the things you like... and we > could also have a sane, clean system that doesn't require all the BS we > currently have to deal with when modifying libMesh? > Of course I would have no problem with a system that has all the features I want. I do not have time to work on it. I do not have money to spend on it. I will be highly reluctant to move away from something that is working and has been working for a long time until I'm satisfied that features I use and count on are functional. But if you are inclined to do it, by all means. Matt has also mentioned what Jed has done with GNU Make. That sounds like it could be a good path to go down if someone is looking to redo the build system. > > It is definitely possible. > > Or: like I said earlier, maybe we just maintain two systems. PETSc has > had several different build systems all at the same time because developers > wanted different things. It definitely could be true that a really simple > pure Make system for everyone that doesn't need the fancy features of > Automake and then Automake for Roy and Paul might be the way to go. > > Derek > > On Thu, Aug 30, 2018 at 3:44 PM Derek Gaston <fri...@gm...> wrote: > >> Both of you claim to use these features - fine - I believe that you do... >> but how many others? >> >> However, it's obvious that you're both passionate about these... so I >> will relent and agree that whatever build system we come up with has to >> have these features. Fine. >> >> Can you agree with me that the current build system adds friction for >> everyone that doesn't use these features? I think you guys are so used to >> how slow and laborious it is that you don't even realize how much better it >> could be. >> >> This happened the other day with John and me: I was complaining about how >> slow "make install" is for libMesh. John said "what do you mean, seems >> fine to me"... 
after a bit of chatting it was clear that he was just so >> used to it that it didn't faze him (he just types "make install" and lets >> it do its thing)... but that doesn't mean that the inefficiency doesn't >> exist! >> >> Derek >> >> On Thu, Aug 30, 2018 at 3:15 PM Roy Stogner <roy...@ic...> >> wrote: >> >>> >>> On Thu, 30 Aug 2018, Derek Gaston wrote: >>> >>> > I would rather fix the core development cycle - then backfill features >>> based on priority (install > check > dist > out-of-tree, etc.) >>> >>> out-of-tree > install > check > dist. >>> >>> > completely chucked a sane development flow for the sake of a few >>> features that are rarely actually used. >>> >>> By "rarely" do you mean "literally all the time, for years, >>> indispensably"? Being able to easily build multiple configurations of >>> the same working tree is incredibly useful. The inability to do this >>> as easily with MOOSE has cost man-hours in both workarounds (when I >>> maintain multiple trees to test different software stacks) and errors >>> (when I don't have room to do so or time to go back-and-forth and one >>> configuration or stack regresses). >>> --- >>> Roy >>> >> |
From: Derek G. <fri...@gm...> - 2018-08-30 20:58:26
|
On Thu, Aug 30, 2018 at 4:35 PM Paul T. Bauman <ptb...@gm...> wrote: > On Thu, Aug 30, 2018 at 3:20 PM Derek Gaston <fri...@gm...> wrote: > >> On Thu, Aug 30, 2018 at 3:02 PM Paul T. Bauman <ptb...@gm...> >> wrote: >> >>> On Thu, Aug 30, 2018 at 1:32 PM Derek Gaston <fri...@gm...> wrote: >>> >>>> On Thu, Aug 30, 2018 at 11:54 AM Roy Stogner <roy...@ic... >>>> > >>>> >>> > That shouldn't happen. I'm betting that you made another change alongside it > or it's an artifact of the symlinking bits (which needs to be fixed). The > only time configure ever needs to be rerun is if you change your > configuration or you change how you configure your build (configure.ac). > When I add new files and hit `make` in my build tree, automake reruns and > then proceeds with the build, regardless of .h or .C file additions. Even > changing Makefile.am does not need a reconfigure. > Really? Adding one header file and rerunning the scripts caused a full reconfigure for me earlier. > > >> Any tiny change to mesh / equation_systems / etc. creates a whole >> rebuild. I'd love to see statistics on how much this saves you. >> > > Sure, because there is a lot of dependency in those files, but I claim > most of the libMesh changes happen much farther downstream of that. In > fact, rarely do I have to recompile more than a handful of libMesh files. > A lot of my work these days is in parallel_implementation.h and communicator.h. Your denial of how much of libMesh you have to (not!) rebuild is just as anecdotal as my denial of needing out-of-tree builds :-) > > >> In addition: MOOSE has never had it and I've only ever heard one person >> complain (Roy). >> >> Most of the MOOSE developers (besides myself and John) simply run the >> MOOSE script that we have to rebuild libMesh because they can't even be >> bothered to deal with it manually. Before Automake that script never even >> existed! >> > > Again, in-source builds just worked for me literally just now. `configure; > make` or just `make`. 
I'm not sure what else you're doing in your script > Building works - but did you try to use it? You can see my "install" script I have to manually run after each build here: https://github.com/friedmud/libmesh_scripts/blob/master/libmesh_install That is the part that I have to keep up to date (it breaks once or twice a year - so not a huge deal). Huh? There are plenty of times already where devel passed and dbg didn't. > Running these configurations in tandem always is critical. > Sure - but _I_ don't have to run them. I let CI run them. > How? It literally just did it and it worked fine. What is in your script > besides `make install`? > See above > It shows adding a file should be streamlined. It's hyperbole to > extrapolate that to saying the build system is broken when literally every > feature of the current build system is exercised and is passing on CI. > It's broken because it doesn't do the very most basic thing - allow you to build the software as you change it. It does some things you like (obviously) - but I don't understand how you can't see that it's non-optimal for a really normal use-case of just adding a file / directory / example. > -1. Rebuilding/linking of all the small executables every time >> > > I agree, but that's easy to add a configure option for (which you'd need > to do in any other build system anyway). > I would rather they were just off by default and you could do a "make apps" or something to build them. Or - you could go into each one and build it individually by typing "make" if you want that one. > -2. In editor building broken (in multiple ways) >> > > This may be valid. I don't do in editor building. > That makes sense! :-) > -3. Pain to add any single new file (especially headers) >> > > This was number 1 under negatives, no double-counting allowed. > This one deserves to be counted twice :-P > -4. Thousands of Makefiles >> > > (A handful) That you never need to look at, touch, or otherwise modify? 
> This is not a valid criticism. > It is - because diffs get generated in them a lot - and they clutter up the world. Only one Makefile should be necessary. > -4b. Makefiles in include directories cause in-editor building to get hung >> up (I have to switch to a .C file and build from there) >> > > I believe this is only because of the symlinking which I agree should go > away. > Symlinking can't go away. We want it there for the include path "namespacing" > -5. Creating new examples is a HUGE pain >> > > It is annoying to get it to `make install` properly (and it's not hugely > annoying; it is mostly automated). Creating a new example is easy. > Easy? You have to manually modify Makefile.am files (last time I did it anyway - is there a script for that now?). > > >> -6. Trying to use an example to try out a new feature (or editing an >> existing example) also sucks. You have to use pretty funky command-lines >> just to get a single executable re-run. >> > > Huh? I'm not sure how it gets any easier than literally running the script > that is installed alongside the example. > Why should I have to _install_ it? I just want to build it and run it real quick. I'm _developing_... that has nothing to do with _installing_. I want to build it in place (which should produce a nice executable that's sitting *right there*) and then just run it! Further, I don't want to run your script - I just want to run the executable! What if I want to pass a different value for a command-line parameter? Do I need to edit the script? Why can't I just run the executable? I guess that gets to the crux of my issue. _installing_ should only be done by end users that want to put the library somewhere in their filesystem to be linked against by multiple applications / users. When I'm developing I want everything to just stay where I'm developing! I seriously don't get it. Do you actually WANT to work this way - or are you just defending it to defend it? 
If you could get out-of-tree builds, and "make install"... without any of the hassle we have currently, would you actually not want that because you really honestly like the current system? I'm just trying to understand your motivation here. I'm honestly wanting to make things better - you just keep throwing up roadblocks about why you're fine with the status quo. Really: the current build system is _painful_. In the same way that you think out-of-tree-builds are "vital": I'm telling you that the current build system is so painful that I go WAY out of my way to avoid interacting with it. I do just about everything I can in a downstream app just to avoid having to deal with libMesh's build system. Can you just stop being defensive for a moment and think about the possibility that the current system isn't awesome so we can have some constructive dialogue about what we could do differently? Derek > |
From: Paul T. B. <ptb...@gm...> - 2018-08-30 20:55:33
|
On Thu, Aug 30, 2018 at 3:44 PM Derek Gaston <fri...@gm...> wrote: > Both of you claim to use these features - fine - I believe that you do... > but how many others? > > However, it's obvious that you're both passionate about these... so I will > relent and agree that whatever build system we come up with has to have > these features. Fine. > Thank you. > Can you agree with me that the current build system adds friction for > everyone that doesn't use these features? > To a degree. I agree that having to individually specify a file in the build system to be added is stupid and shouldn't be needed. I do not agree that out-of-source builds, make install, make check, make dist, make distcheck add any friction. You can do an in-source build and do all of those things. There is no friction. I agree that it should be easier to add an example. > I think you guys are so used to how slow and laborious it is that you > don't even realize how much better it could be. > > This happened the other day with John and me: I was complaining about how > slow "make install" is for libMesh. John said "what do you mean, seems > fine to me"... after a bit of chatting it was clear that he was just so > used to it that it didn't faze him (he just types "make install" and lets > it do its thing)... but that doesn't mean that the inefficiency doesn't > exist! > I would love to see how it could be sped up. The vast majority of time in make install is spent in linking the library and there's no getting around that. > > Derek > > On Thu, Aug 30, 2018 at 3:15 PM Roy Stogner <roy...@ic...> > wrote: > >> >> On Thu, 30 Aug 2018, Derek Gaston wrote: >> >> > I would rather fix the core development cycle - then backfill features >> based on priority (install > check > dist > out-of-tree, etc.) >> >> out-of-tree > install > check > dist. >> >> > completely chucked a sane development flow for the sake of a few >> features that are rarely actually used. 
>> >> By "rarely" do you mean "literally all the time, for years, >> indispensibly"? Being able to easily build multiple configurations of >> the same working tree is incredibly useful. The inability to do this >> as easily with MOOSE has cost in man-hours of both workarounds (when I >> maintain multiple trees to test different software stacks) and errors >> (when I don't have room to do so or time to go back-and-forth and one >> configuration or stack regresses). >> --- >> Roy >> > |