From: Phil R. <p.d...@gm...> - 2015-05-30 11:11:22
|
Hi All, I have just pushed some changes allowing the wxWidgets driver to generate text sizes for layout purposes. However, for use with wxPLViewer (i.e. when the wxWidgets driver is used from the command line) this involves multiple checks backwards and forwards, which is pretty slow. I have checked example 26 and this works fine, but it takes a couple of seconds to render. I'd like some comments on whether people think that the speed is an issue, especially if they have any real-world examples they could test. Note that when using PLplot within your own wxWidgets application this isn't an issue, as wxPLViewer isn't used and the driver can check the text size itself. Phil |
From: Orion P. <or...@co...> - 2015-05-29 20:37:19
|
On 05/22/2015 06:57 PM, Alan W. Irwin wrote: > On 2015-05-22 11:05-0600 Orion Poplawski wrote: > >> with current plplot git on Fedora rawhide with cmake 3.2.2, I'm getting: >> >> -- PDL_VERSION = 2.007 >> CMake Error at cmake/modules/plplot_functions.cmake:181 (math): >> math cannot parse the expression: "2*1000000 + 007*1000 + 2.007": syntax >> error, unexpected exp_NUMBER, expecting $end (28) >> Call Stack (most recent call first): >> cmake/modules/pdl.cmake:63 (transform_version) >> cmake/modules/plplot.cmake:489 (include) >> CMakeLists.txt:111 (include) >> >> I don't normally build on a system with PDL installed, so I didn't see this >> before. > > Commit 38ea064 should fix this, but please check and let me know whether > or not that is the case. > > Alan Looks good: -- PDL_VERSION = 2.008 thanks -- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA, Boulder/CoRA Office FAX: 303-415-9702 3380 Mitchell Lane or...@nw... Boulder, CO 80301 http://www.nwra.com |
From: Phil R. <p.d...@gm...> - 2015-05-28 09:50:17
|
Hi Jim, Sorry, when I suggested a plmalloc and plfree I meant for these to call malloc and free, plus do some extra bookkeeping for us to keep track of our garbage collection needs, not as functions to interact directly with the OS to bypass malloc and free. I think the following could be a useful model. If in plstream we have variables something like void **tempMem; PLINT nTempMemArrays; void **nextTempMemArray; then a function plmalloc would malloc the needed memory, assign it to nextTempMemArray, increment nextTempMemArray ready for the next allocation and, if needed, increase the size of tempMem. plfree would free the memory, search tempMem and set that element to NULL, possibly rolling back nextTempMemArray to the point after the last non-null pointer. Then when we return from an API function we can call a function something like plCleanMem which would free all memory from the tempMem array. In fact there would really be little need to call plfree unless the memory used is large and we are worried about resources; before returning to the calling program everything would be cleaned up anyway. Even without a setjmp/longjmp error handling mechanism, this seems like a useful way for us to manage memory in PLplot and avoid memory leaks. I have heard about using memory pools before, but what happens when the pool runs out? Does the whole pool need reallocating? I think the method above does what we need with minimal fuss, but I am very open to other suggestions; this is all new to me in a C context. Comments, anyone? The only issue I can think of is reentrancy. If we call an API function internally then it will free all the memory before returning, some of which is likely to still be in use. So something like this would need us to separate out the public API from internal functions which do the same job; the public API would just call the internal function. Does anyone know if this would affect the other language bindings? 
With this in place, using setjmp/longjmp seems to give us a trivial way to remove all our plexit calls. Phil On 26 May 2015 at 21:17, Jim Dishaw <ji...@di...> wrote: > > > > >> On May 26, 2015, at 3:53 PM, Phil Rosenberg <p.d...@gm...> wrote: >> >> Okay, well it was an option I thought I would put out there as I think >> it was worth considering. I will try to answer the questions people >> had anyway in case people are interested. >> >> Regarding domestic bindings, the C front end would remain, although >> the API would have to change if we wanted to return an error code. But >> this is about plplot internal code. What is the maximum number of >> levels back which we might need to pass an error code? A dozen or more >> probably. The idea is to avoid all that internal bookkeeping, >> especially if 10 layers down a memory allocation fails. Every layer up >> from this needs to have code to deal with checking error codes for >> every function it calls and then free resources and propagate that >> error code. With the C++ model a throw does all that automatically. >> > > I was looking at consolidating all the error/warning message routines into one or two functions and creating a new file in the src directory. One of the motivations was to clean up some of the mallocs and static char array allocations that are scattered throughout the code. I have some code that I have been using for a while that I was going to offer. The other advantage is that it makes it easier to implement different translations. > >> Regarding efficiency: there would almost certainly be no change. The >> compiler would probably optimise away anything superfluous. >> >> As far as I know C++ is ubiquitous. The earliest C++ compilers >> actually rewrote the C++ to C and passed it to a C compiler! >> >> Compilation speed for C++ can be slower if there is a lot of use of >> templates. 
Templates are functions or classes which can have a version >> for any variable type, so instead of writing >> int round( int var, int basis); >> float round( float var, float basis); >> double round( double var, double basis); >> ... >> >> you have >> template<class T> >> T round( T var, T basis); >> >> Then the user can call round with any type and it will work, providing >> the code is compilable when you replace T with your type or class. But >> there is a compile-time cost associated with the extra work required by >> the compiler to do this. >> >> Regarding name mangling, we can use extern "C" to give C name mangling to >> the API; then the library, both static and dynamic as I understand it, >> behaves just like a C library. >> >> As far as using a C++ compiler is concerned, I am afraid that is >> something I have never worried about as I only write C++ programs so I >> always link against C runtimes. But as Alan says we already use C++ >> code in PLplot so this should already be taken care of. >> >> If we are not interested in moving to C++ then we still need a method >> to propagate errors up through the multiple layers of code in the >> PLplot library and do so in a way which minimises the risk of memory >> or other resource leaks. Because I work in C++ I'm not necessarily the >> best person to know best practice for this. But here are some ideas; I >> would be very interested to hear comments and more suggestions. >> >> 1) We make every internal function in PLplot return an error code and >> check these codes religiously at every call. Even simple functions >> would probably have to do this, because in the future simple functions >> may grow into more complex ones, and changing a function that previously >> returned a meaningful value to one which returns an error code is >> likely to be error-prone, so best get them all out of the way. >> >> 2) We include an error code in the PLStream struct. 
We then check this >> after every function call to check if it has changed. Again this just >> requires us to be religious about checking the current error code. >> >> 3) I just found out about setjmp and longjmp. These are C macros. As >> far as I can tell setjmp saves the state of the program stack and >> creates a jump point. Calling longjmp jumps back to the jump point >> and restores the state. However, any memory malloc'd in the meantime >> will leak, unless we have some way to free it later. This might be >> conceivable by having an array of pointers in the PLStream to all >> memory allocated (maybe create a function plmalloc and plfree to deal >> with this?) which we can deal with at the end. Does anyone have any >> experience with setjmp and longjmp or any advice? I also don't know >> how it deals with re-entrancy or nesting of setjmp calls. Or how to >> deal with our C++ code calling our C code - does our C++ code only >> call the public API or does it have access to internal routines? >> > > I have used the setjmp/longjmp approach. You definitely want to minimize the mallocs and the "distance" you jump. I prefer to do big mallocs and then partition the chunk of memory as needed rather than calling malloc multiple times. That helps when unwinding to clean up on an error condition. Wrapping malloc/free with a plmalloc/plfree approach can be useful if the intent is to help with cleanup. I do not recommend trying to manage memory because it is a hard job to do correctly. > >> 4) Any other suggestions? >> >> All these methods require us to have some strategy for dealing with >> freeing memory; although it is often trivial, in some cases this can be >> complex and a strong candidate for future bugs and memory leaks. >> >> I'm really keen to hear people's thoughts on this. 
As I said I work in >> C++, so I don't know best practice in C for dealing with this, but we >> definitely need to make a call and have a definite strategy for this >> otherwise it will be a nightmare. >> >> Phil >> >> >>> On 25 May 2015 at 21:06, Alan W. Irwin <ir...@be...> wrote: >>>> On 2015-05-25 17:29-0000 Arjen Markus wrote: >>>> >>>> An issue related to the use of C++ that has not been raised yet, but >>> which surfaced recently in my comprehensive testing efforts is the >>> fact that linking a C++ program requires a C++-enabled linker. Thus >>> introducing C++ as the language in which PLplot is to be implemented >>> would complicate the use of static builds. That may not be the most >>> common option nowadays, but I think we need to take a conscious >>> decision: do we want to continue to support static builds or not? One >>> pro for static builds is that they make deployment, especially of >>> binary-only distributions much easier (and safer). >>> >>> Hi Arjen: >>> >>> Yes, I think we should keep supporting static builds which work >>> virtually perfectly now with our CMake-based build systems for the >>> build of PLplot from source and the build of the installed examples. >>> >>> I assume what you mean by a C++-enabled linker is that extra libraries >>> have to be linked in for that case. Our CMake build handles this >>> situation with ease both for the core build and installed examples >>> build. So static linking is already completely supported in that case. >>> >>> Of course, there is currently a limitation on our traditional >>> (Makefile + pkg-config) build for e.g., Fortran examples where >>> pkg-config does not automatically know the name/location of the >>> special C++ library that needs to be linked in for a given C++ >>> compiler so the user would have to add that link himself to the >>> pkg-config results to successfully build the Fortran examples using >>> our traditional build system for the installed examples. 
>>> >>> Currently this problem occurs if C++ code is included in libplplot >>> from our C++ device drivers, psttf, qt, and wxwidgets. But that is a >>> very common case (or should be) to have those device drivers enabled >>> so if we adopt C++ for the core library this limitation in our >>> traditional build will essentially just stay the same in most cases. >>> So I don't see this limitation of our traditional build system for the >>> installed examples as a big concern with switching our core library to >>> C++ in all cases. >>> >>> Don't get me wrong, I would like this limitation to be resolved so >>> that our traditional build of the installed examples works as well as >>> the CMake build of those. When discussing this with Andrew I >>> mentioned one possibility for implementing a fix for this issue, but >>> that is a lot of work which I am going to leave to others if they want >>> to make our Makefile+pkg-config approach as high quality as the >>> CMake-based one for building our installed examples. >>> >>> Alan >>> >>> __________________________ >>> Alan W. Irwin >>> >>> Astronomical research affiliation with Department of Physics and Astronomy, >>> University of Victoria (astrowww.phys.uvic.ca). >>> >>> Programming affiliations with the FreeEOS equation-of-state >>> implementation for stellar interiors (freeeos.sf.net); the Time >>> Ephemerides project (timeephem.sf.net); PLplot scientific plotting >>> software package (plplot.sf.net); the libLASi project >>> (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); >>> and the Linux Brochure Project (lbproject.sf.net). 
>>> __________________________ >>> >>> Linux-powered Science >>> __________________________ |
From: Phil R. <p.d...@gm...> - 2015-05-28 09:08:05
|
All that sounds good to me, Alan. Regarding the naming - well, that is not so important to me as everything else, and it sounds like there are options if we need them. Phil On 28 May 2015 at 00:33, Alan W. Irwin <ir...@be...> wrote: > On 2015-05-26 22:13+0100 Phil Rosenberg wrote: > >> On 24 May 2015 at 23:01, Alan W. Irwin <ir...@be...> wrote: >>> >>> [...] I just skimmed an interesting paper called "Fighting >>> regressions with git bisect" by Christian Couder which I highly >>> recommend to others here. (You can probably find it with a google >>> search, but for Debian wheezy it appears as >>> <file:///usr/share/doc/git/html/git-bisect-lk2009.html>.) > > >> Yeah I used git bisect a few times and it was useful. I haven't >> thought about how it would work with merges, but I can imagine how >> life would be easier with a linear workflow. Although presumably with >> merges, git bisect would simply fork down each branch? > > > Read that paper to see what happens for complicated history topology. > >> As I said with just me using 3 computers I could not get a >> system that worked. With more of us I can guarantee it won't. Don't >> forget also that there are likely to be a number of developments that >> aren't quite ready in time for release so even a final rebase will >> stuff up all of these. > > > See below. >> >> >>> >>>> >>>> So perhaps some general points now >>>> >>>> In the last development cycle I tried to work simultaneously on my >>>> Windows machine and two Linux machines to test my personal branches on >>>> all three. The rebase only workflow made this essentially impossible. >>>> I continually ended up in situations where I broke my repos because I >>>> rebased some work that existed in another repo and this caused massive >>>> issues. This was one person with three checked out versions of the >>>> code and it was a nightmare. If we have more than one person then I >>>> can guarantee we will break people's repos with almost every rebase. 
>>> >>> >>> I think this is an illustration of the point I made previously in this >>> thread that pushing between multiple servers is virtually a guarantee >>> of merge commits which are prohibited under the rebase-only workflow. >>> >>> However, I think you could have made the above work quite simply as >>> follows (or at least I would like to see you try this method next >>> time): >>> >>> 1. Always keep master on each server exactly the same as master on >>> our official SF server. >>> >>> 2. Always rebase your topic branch on each server on that (common) >>> master branch. >> >> >> This doesn't work. Let's say I have my topic branch on PC1 and PC2. If >> I rebase my topic branch on PC1 then history gets rewritten and all my >> commits get a new GUID. > > I am not going to argue with you further here except that with care I > am positive the "git format-patch"/"git am" method will work, but I > have had a further thought that makes that argument irrelevant. > > You do have to be careful synchronizing local and remote public > branches, which appears to me to be exactly what you were trying to do > amongst your various git servers. Think of all the extra steps we go > through to do that correctly for the master branch according to our > current workflow. So to do the same thing for a public topic branch I > think you have to go through exactly the same procedures, i.e., work > on a local private topic branch for plplot6 called private_plplot6, > git fetch to update the public branch origin/plplot6, merge --ff-only > that with local plplot6, rebase private_plplot6 on plplot6, merge > --ff-only private_plplot6 onto plplot6, then push plplot6 back to get > all your local plplot6 changes published to the rest of > your servers. But to make this all work, it is clear you could not > rebase plplot6 on (say) the ongoing PLplot 5 development occurring in > master until there was absolutely no work left on plplot6. 
You have > made this point before about rebasing public topic branches so it > appears I have arrived at the same conclusion by a different route. > Note, however, that the above steps do use rebase properly and are > consistent with our rebase workflow! > >> [...]So I think I agree with most stuff up to here apart from rebasing >> during 6.0 development and maybe what happens at the "end". See below >> >>> If we follow that model it should be possible in theory to go smoothly >>> from the last PLplot 5.x release (where x < 99) to the release of >>> 6.0.0 with no overlap in support between PLplot 6 and 5, but we should >>> definitely tag the last PLplot 5 commit (as I stated in the details >>> above), and subsequently (if needed) use that tag as the origin of a >>> semi-permanent public plplot5 branch if we need to do further PLplot >>> 5.x bug-fix releases. Obviously in this case the plplot5 branch would >>> never be rebased or merged with the master branch since PLplot 5 would >>> be a dead-end branch of development (only devoted to minimal bug >>> fixing) by design. >> >> >> So I am now rather out of my depth here. This should definitely be >> considered as opinion for discussion, and I would love feedback on this. Many >> projects seem to have some form of bugfix support for old versions, or >> at the very least the option to install older versions. For example, I >> can still go to my package manager and install GTK2 and wxWidgets 2.8, >> despite the fact that GTK3 and wxWidgets 3.0 exist. The reasons are >> clear. If someone has code that uses an old API then they may not want >> to rewrite for the new API, or if someone has written another package >> which depends on PLplot, then if we change our API in the version that >> exists in the repos, we break all those builds. I can therefore >> see a big advantage to simply leaving plplot5 frozen and creating a >> "new" library/package called plplot6 which moves forward with the API. 
> > > Agreed (since I have mentioned this same idea before.) We do not have > the developer resources to work on or support two different major > versions of PLplot. So the idea would be the first priority of PLplot > 6 would be to finish the backwards-incompatible stuff and pass all > tests as well as PLplot 5 currently does without introducing many (if > any) new features. Once such an initial release of PLplot 6 was made, > then we could go on from there adding new features as desired. And > PLplot 5 development would be severely limited to just fixing the most > egregious bugs. > >> That way apt-get install plplot still gives me a package compatible >> with the other binaries as expected and when ready a user can choose >> to upgrade to plplot6. As jim mentioned in an email. PLPlot 5 might be >> around for some time to come. > > > Yes, as the current PLplot-5.11.0, imminent 5.11.1 (and future? > 5.11.2) tarball releases which the PLplot 5 packagers could use for > years to come. > >> [...] If at the end we >> do want to merge plplot 5 and plplot 6, then I think that is exactly >> what we should do. In this one single instance to save us a lot of >> pain I think we should do a merge. I think that one single merge only >> at the changeover of major versions would be entirely acceptable even >> to most rebase advocates and I am sure this would be better than a >> rebase of a public branch. > > > As you might expect I am completely against breaking our workflow > unless and until we decide to permanently adopt another workflow model. > > That said, I think we are now agreed that once PLplot 6 development > starts, PLplot 5 development will continue on master and PLplot 6 > development will be done on a public branch plplot6 with no rebasing > of plplot6 on master. (Instead, fixes on master will have to be > propagated to plplot6 in other ways such as "git format-patch"/"git > am".) 
So if enthusiasm for PLplot 6 wanes, there are several false > starts, or the final product doesn't work very well, our ongoing > development of PLplot 5 will be unaffected. > > So really the only point of contention left is _if_ PLplot 6 development > is a complete success, how to reach the end goal of > ongoing PLplot 6 development and releases occurring on a branch called > master and ongoing PLplot 5 development (if there is any) should occur > on a branch called plplot5? One idea that I have mentioned before > which might work is to simply rename the two branches. See > <http://stackoverflow.com/questions/4753888/git-renaming-branches-remotely> > for further discussion. There was no mention there of problems for > other users (other than to rename their local branches consistently). > For example, if there was some chance of other users losing their > work, I think it would have been mentioned. > > However, at this stage I don't think we have enough information to > make a final decision now about how to implement if/when PLplot 6 is a > success. In other words, let's cross that bridge when we come to it > and not argue about it now. After all, PLplot 6 right now is a dream > in everyone's mind, but the fundamental decisions about how that > should be implemented (i.e., C++ based or C based?) have not even been > agreed on yet. And our group experience with the current workflow is > still painfully inadequate in my mind, and we will have a lot more > experience with it (for example, I hope you will be quite experienced > and comfortable with it for your 3 servers) by the time that we are > ready to release PLplot 6. > > > Alan |
From: Arjen M. <Arj...@de...> - 2015-05-28 07:52:19
|
Hi Alan, > -----Original Message----- > From: Alan W. Irwin [mailto:ir...@be...] > Hi Arjen: > > I request two changes in the results: > > 1. Please use the --stdout option to git format-patch. That allows you to store the > result in one file rather than a series of them. > Clear, I was a bit surprised seeing these two files myself. > 2. Base your topic branch on work that is available to me from the SF repository. > The reason I mention this is the files you deleted were not clean. They had some > local changes you had made before deleting them so that applying the patches here > failed. Here is an example, where I edited your deleted file patch to remove all the > starting "-" characters so that it should have been identical with the file to be deleted. > Instead, I got this diff between the two results: > While these changes are irrelevant, that is indeed annoying. I will have to take more care with this. Regards, Arjen DISCLAIMER: This message is intended exclusively for the addressee(s) and may contain confidential and privileged information. If you are not the intended recipient please notify the sender immediately and destroy this message. Unauthorized use, disclosure or copying of this message is strictly prohibited. The foundation 'Stichting Deltares', which has its seat at Delft, The Netherlands, Commercial Registration Number 41146461, is not liable in any way whatsoever for consequences and/or damages resulting from the improper, incomplete and untimely dispatch, receipt and/or content of this e-mail. |
From: Arjen M. <Arj...@de...> - 2015-05-28 07:49:07
|
Hi Alan, That sounds interesting indeed - as for exotic C++ compilers: I guess CMake would have difficulty finding them in the first place. The main thing is that knowledge of all these libraries is easily accessible. Regards, Arjen > -----Original Message----- > From: Alan W. Irwin [mailto:ir...@be...] > > There was a question about CMake and C++ this morning on the CMake list where > the reply contained this very useful bit of information: > ... > So it appears to me that CMake has collected all of its knowledge about the special > libraries associated with any given C++ compiler in this file |
From: Alan W. I. <ir...@be...> - 2015-05-27 23:33:09
|
On 2015-05-26 22:13+0100 Phil Rosenberg wrote: > On 24 May 2015 at 23:01, Alan W. Irwin <ir...@be...> wrote: >> [...] I just skimmed an interesting paper called "Fighting >> regressions with git bisect" by Christian Couder which I highly >> recommend to others here. (You can probably find it with a google >> search, but for Debian wheezy it appears as >> <file:///usr/share/doc/git/html/git-bisect-lk2009.html>.) > Yeah I used git bisect a few times and it was useful. I haven't > thought about how it would work with merges, but I can imagine how > life would be easier with a linear workflow. Although presumably with > merges, git bisect would simply fork down each branch? Read that paper to see what happens for complicated history topology. > As I said with just me using 3 computers I could not get a > system that worked. With more of us I can guarantee it won't. Don't > forget also that there are likely to be a number of developments that > aren't quite ready in time for release so even a final rebase will > stuff up all of these. See below. > >> >>> >>> So perhaps some general points now >>> >>> In the last development cycle I tried to work simultaneously on my >>> Windows machine and two Linux machines to test my personal branches on >>> all three. The rebase only workflow made this essentially impossible. >>> I continually ended up in situations where I broke my repos because I >>> rebased some work that existed in another repo and this caused massive >>> issues. This was one person with three checked out versions of the >>> code and it was a nightmare. If we have more than one person then I >>> can guarantee we will break people's repos with almost every rebase. >> >> >> I think this is an illustration of the point I made previously in this >> thread that pushing between multiple servers is virtually a guarantee >> of merge commits which are prohibited under the rebase-only workflow. 
>> >> However, I think you could have made the above work quite simply as >> follows (or at least I would like to see you try this method next >> time): >> >> 1. Always keep master on each server exactly the same as master on >> our official SF server. >> >> 2. Always rebase your topic branch on each server on that (common) >> master branch. > > This doesn't work. Let's say I have my topic branch on PC1 and PC2. If > I rebase my topic branch on PC1 then history gets rewritten and all my > commits get new IDs. I am not going to argue with you further here except that with care I am positive the "git format-patch"/"git am" method will work, but I have had a further thought that makes that argument irrelevant. You do have to be careful synchronizing local and remote public branches, which appears to me to be exactly what you were trying to do amongst your various git servers. Think of all the extra steps we go through to do that correctly for the master branch according to our current workflow. So to do the same thing for a public topic branch I think you have to go through exactly the same procedures, i.e., work on a local private topic branch for plplot6 called private_plplot6, git fetch to update the public branch origin/plplot6, merge --ff-only that with local plplot6, rebase private_plplot6 on plplot6, merge --ff-only private_plplot6 onto plplot6, then push plplot6 back to get all your local plplot6 changes published to the rest of your servers. But to make this all work, it is clear you could not rebase plplot6 on (say) the ongoing PLplot 5 development occurring in master until there was absolutely no work left on plplot6. You have made this point before about rebasing public topic branches so it appears I have arrived at the same conclusion by a different route. Note, however, that the above steps do use rebase properly and are consistent with our rebase workflow! 
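The synchronization procedure described above can be illustrated with a throwaway sandbox script. Everything here is invented for the illustration: "origin.git" stands in for the official SF repository, and the branch names plplot6 / private_plplot6 follow the text.

```shell
set -e
cd "$(mktemp -d)"

# origin.git stands in for the shared (SF) repository.
git init -q --bare origin.git
git clone -q origin.git work
cd work
git config user.email "dev@example.org"
git config user.name "Dev"

# Seed the shared public topic branch.
echo "base" > README
git add README
git commit -q -m "plplot6 branch point"
git branch plplot6
git push -q origin plplot6

# Day-to-day work happens on a local private branch...
git checkout -q -b private_plplot6 plplot6
echo "work" > feature.c
git add feature.c
git commit -q -m "local PLplot 6 work"

# ...then: fetch, fast-forward the local public branch, rebase the
# private work on it, fast-forward the public branch again, and publish.
git fetch -q origin
git checkout -q plplot6
git merge -q --ff-only origin/plplot6
git rebase -q plplot6 private_plplot6
git checkout -q plplot6
git merge -q --ff-only private_plplot6
git push -q origin plplot6
```

Because every step that touches the public plplot6 branch is a fast-forward, local plplot6 never diverges from origin/plplot6, which is the point of the procedure.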
> [...]So I think I agree with most stuff up to here apart from rebasing > during 6.0 development and maybe what happens at the "end". See below > >> If we follow that model it should be possible in theory to go smoothly >> from the last PLplot 5.x release (where x < 99) to the release of >> 6.0.0 with no overlap in support between PLplot 6 and 5, but we should >> definitely tag the last PLplot 5 commit (as I stated in the details >> above), and subsequently (if needed) use that tag as the origin of a >> semi-permanent public plplot5 branch if we need to do further PLplot >> 5.x bug-fix releases. Obviously in this case the plplot5 branch would >> never be rebased or merged with the master branch since PLplot 5 would >> be a dead-end branch of development (only devoted to minimal bug >> fixing) by design. > > So I am now rather out of my depth here. This should definitely be > considered opinion for discussion and I would love feedback on this. Many > projects seem to have some form of bugfix support for old versions or > at the very least the option to install older versions. For example I > can still go to my package manager and install GTK2 and wxWidgets 2.8, > despite the fact that GTK3 and wxWidgets 3.0 exist. The reasons are > clear. If someone has code that uses an old API then they may not want > to rewrite for the new API, and if someone has written another package > which depends on plplot, then when we change our API and the version that > exists in the repos, we break all those builds. I can therefore > see a big advantage to simply leaving plplot5 frozen, and creating a > "new" library/package called plplot6 which moves forward with the API. Agreed (since I have mentioned this same idea before). We do not have the developer resources to work on or support two different major versions of PLplot. 
So the idea would be that the first priority of PLplot 6 would be to finish the backwards-incompatible stuff and pass all tests as well as PLplot 5 currently does without introducing many (if any) new features. Once such an initial release of PLplot 6 was made, then we could go on from there adding new features as desired. And PLplot 5 development would be severely limited to just fixing the most egregious bugs. > That way apt-get install plplot still gives me a package compatible > with the other binaries as expected and when ready a user can choose > to upgrade to plplot6. As Jim mentioned in an email, PLplot 5 might be > around for some time to come. Yes, as the current PLplot-5.11.0, imminent 5.11.1 (and future? 5.11.2) tarball releases which the PLplot 5 packagers could use for years to come. > [...] If at the end we > do want to merge plplot 5 and plplot 6, then I think that is exactly > what we should do. In this one single instance to save us a lot of > pain I think we should do a merge. I think that one single merge only > at the changeover of major versions would be entirely acceptable even > to most rebase advocates and I am sure this would be better than a > rebase of a public branch. As you might expect I am completely against breaking our workflow unless and until we decide to permanently adopt another workflow model. That said, I think we are now agreed that once PLplot 6 development starts, PLplot 5 development will continue on master and PLplot 6 development will be done on a public branch plplot6 with no rebasing of plplot6 on master. (Instead, fixes on master will have to be propagated to plplot6 in other ways such as "git format-patch"/"git am".) So if enthusiasm for PLplot 6 wanes, there are several false starts, or the final product doesn't work very well, our ongoing development of PLplot 5 will be unaffected. 
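The "git format-patch"/"git am" propagation mentioned above can be sketched in a toy sandbox. All names here (the repository, the file core.c, the commit messages) are invented for the illustration; the point is carrying a master fix to the long-lived plplot6 branch without any merge or rebase of the public branch.

```shell
set -e
cd "$(mktemp -d)"
git init -q repo
cd repo
git config user.email "dev@example.org"
git config user.name "Dev"

# master carries ongoing PLplot 5 development.
echo "v5" > core.c
git add core.c
git commit -q -m "PLplot 5 core"

# plplot6 is the long-lived public PLplot 6 branch.
git branch plplot6

# A bug fix lands on master...
echo "v5 fixed" > core.c
git commit -q -a -m "Fix egregious bug"

# ...and is carried to plplot6 as a mailbox patch series,
# leaving both branch histories untouched otherwise.
git format-patch -o ../patches plplot6..master
git checkout -q plplot6
git am ../patches/*.patch
```

After "git am", plplot6 has its own copy of the fix commit; neither branch has been rebased or merged, consistent with the policy described above.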
So really the only point of contention left is, _if_ PLplot 6 development is a complete success, how to reach the end goal of ongoing PLplot 6 development and releases occurring on a branch called master, with ongoing PLplot 5 development (if there is any) occurring on a branch called plplot5. One idea that I have mentioned before which might work is to simply rename the two branches. See <http://stackoverflow.com/questions/4753888/git-renaming-branches-remotely> for further discussion. There was no mention there of problems for other users (other than to rename their local branches consistently). For example, if there was some chance of other users losing their work, I think it would have been mentioned. However, at this stage I don't think we have enough information to make a final decision now about how to implement that change if/when PLplot 6 is a success. In other words, let's cross that bridge when we come to it and not argue about it now. After all, PLplot 6 right now is a dream in everyone's mind, but the fundamental decisions about how that should be implemented (i.e., C++ based or C based?) have not even been agreed on yet. And our group experience with the current workflow is still painfully inadequate in my mind, and we will have a lot more experience with it (for example, I hope you will be quite experienced and comfortable with it for your 3 servers) by the time that we are ready to release PLplot 6. Alan __________________________ Alan W. Irwin Astronomical research affiliation with Department of Physics and Astronomy, University of Victoria (astrowww.phys.uvic.ca). Programming affiliations with the FreeEOS equation-of-state implementation for stellar interiors (freeeos.sf.net); the Time Ephemerides project (timeephem.sf.net); PLplot scientific plotting software package (plplot.sf.net); the libLASi project (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); and the Linux Brochure Project (lbproject.sf.net). 
__________________________ Linux-powered Science __________________________ |
From: Alan W. I. <ir...@be...> - 2015-05-27 21:50:54
|
On 2015-05-25 13:06-0700 Alan W. Irwin wrote: > Don't get me wrong, I would like this limitation to be resolved so > that our traditional build of the installed examples works as well as > the CMake build of those. When discussing this with Andrew I > mentioned one possibility for implementing a fix for this issue, but > that is a lot of work which I am going to leave to others if they want > to make our Makefile+pkg-config approach as high quality as the > CMake-based one for building our installed examples. There was a question about CMake and C++ this morning on the CMake list where the reply contained this very useful bit of information: "Look at CMakeFiles/${CMAKE_VERSION}/CMake*Compiler.cmake for details of the compiler detected for each language. Look for these variables: CMAKE_C_COMPILER CMAKE_C_IMPLICIT_LINK_LIBRARIES CMAKE_CXX_COMPILER CMAKE_CXX_IMPLICIT_LINK_LIBRARIES " So I did that for my latest build and found software@raven> grep CMAKE_CXX_IMPLICIT_LINK_ CMakeFiles/3.0.2/CMakeCXXCompiler.cmake set(CMAKE_CXX_IMPLICIT_LINK_LIBRARIES "stdc++;m;c") set(CMAKE_CXX_IMPLICIT_LINK_DIRECTORIES "/usr/lib/gcc/x86_64-linux-gnu/4.7;/usr/lib/x86_64-linux-gnu;/usr/lib;/lib/x86_64-linux-gnu;/lib") set(CMAKE_CXX_IMPLICIT_LINK_FRAMEWORK_DIRECTORIES "") So it appears to me that CMake has collected all of its knowledge about the special libraries associated with any given C++ compiler in this file. Therefore, I plan to remove the current restriction on our traditional build system for the installed examples by finding and parsing this file for the values of CMAKE_CXX_IMPLICIT_LINK_LIBRARIES and CMAKE_CXX_IMPLICIT_LINK_DIRECTORIES; doing the appropriate find_library commands to find exact special library locations, and transforming this information into the -L and -l format required by our configured pkg-config files. 
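The extraction and transformation described above can be sketched in shell. The mock file below reproduces the two set() lines quoted in the message (the real fix would of course parse the CMakeFiles/${CMAKE_VERSION}/CMakeCXXCompiler.cmake file that CMake itself wrote, and would follow up with find_library as described); the extract helper and the -L/-l conversion are an assumption about how the parsing could look, not the actual implementation.

```shell
set -e
cd "$(mktemp -d)"

# Mock of the relevant lines of CMakeCXXCompiler.cmake quoted above.
cat > CMakeCXXCompiler.cmake <<'EOF'
set(CMAKE_CXX_IMPLICIT_LINK_LIBRARIES "stdc++;m;c")
set(CMAKE_CXX_IMPLICIT_LINK_DIRECTORIES "/usr/lib/gcc/x86_64-linux-gnu/4.7;/usr/lib/x86_64-linux-gnu;/usr/lib;/lib/x86_64-linux-gnu;/lib")
EOF

# Extract the quoted semicolon-separated value of a given variable.
extract() {
  sed -n "s/^set($1 \"\(.*\)\")$/\1/p" CMakeCXXCompiler.cmake
}

# Transform the semicolon-separated CMake lists into the -l / -L form
# needed by the configured pkg-config *.pc files.
libs=$(extract CMAKE_CXX_IMPLICIT_LINK_LIBRARIES | tr ';' '\n' | sed 's/^/-l/' | tr '\n' ' ')
dirs=$(extract CMAKE_CXX_IMPLICIT_LINK_DIRECTORIES | tr ';' '\n' | sed 's/^/-L/' | tr '\n' ' ')
echo "Libs.private fragment: $dirs$libs"
```

For the quoted file contents this prints the directories as -L flags followed by "-lstdc++ -lm -lc", i.e., the C++ special libraries in the form a pkg-config Libs.private field expects.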
Anyhow, once I have implemented this change there should be no further traditional build system concern over linking issues with C++ either for PLplot 5 or PLplot 6 unless the user is using some exotic C++ compiler that CMake has never heard of. Alan |
From: Orion P. <or...@co...> - 2015-05-27 18:23:12
|
On 05/18/2015 06:41 PM, Alan W. Irwin wrote: > On 2015-05-18 16:53-0600 Orion Poplawski wrote: > >> On 05/18/2015 01:35 AM, Alan W. Irwin wrote: > [...] >>> On 2015-04-25 12:42-0600 Orion Poplawski wrote: >>> >>>> On 04/25/2015 11:05 AM, Alan W. Irwin wrote: >>>>> While we are at it, are there any other general issues like this one >>>>> (i.e., issues likely to affect all distros) addressed by the Fedora >>>>> downstream patches which we should be aware of upstream? >>>> >>>> There are two other issues currently addressed downstream, which I'm >>>> pretty >>>> sure I've raised here before. The ocaml one relatively recently, >>>> not sure >>>> about the multiarch one. >>>> >>>> Patch2: plplot-multiarch.patch >>>> >>>> This allows for the "core" plplot package to be "multiarch" - >>>> exactly the >>>> same content for 32-bit and 64-bit builds. Otherwise the >>>> PKG_CONFIG_ENV and >>>> RPATH variables have /usr/lib or /usr/lib64 in them. I know this patch >>>> isn't acceptable upstream as it is, but if you found a way to >>>> address it, >>>> that would be great. >>> >>> Patch2a: PKG_CONFIG_ENV >>> >>> I have now (commit id 2b4e397) implemented a user-configurable >>> location called >>> CMAKE_INSTALL_PKG_CONFIG_DIR where the PLplot *.pc files are >>> installed. The default value for this cached variable is >>> >>> $prefix/share/pkgconfig >>> >>> which is apparently what Debian wheezy uses for multiarch *.pc files. >>> >>> @Andrew: can you confirm that location for modern Debian? >>> >>> @Orion: If that default location is not right for the Fedora multiarch >>> needs, try setting CMAKE_INSTALL_PKG_CONFIG_DIR on the cmake command >>> line. >> >> *If* the pkgconfig files were "multiarch"/noarch, that would be the >> place to >> install them. However, they are not noarch - they contain /usr/lib or >> /usr/lib64 depending on the architecture: >> > [...] 
>> plplot.pc:libdir=/usr/lib64 >> plplot.pc:drvdir=/usr/lib64/plplot5.11.0/drivers >> plplot.pc:Libs.private: -L"${libdir}" -L"/usr/lib64" -lltdl >> -L"/usr/lib64" -lm >> -L"/usr/lib64" -lshp -L"/usr/lib64" -lfreetype -lcsirocsa -lcsironn >> -lqhull >> -lqsastime > [...] > > OK. Thanks for reminding me that these *.pc files are arch-dependent. > I don't know what I was thinking. > > But it also appears that Fedora and Debian disagree here concerning > library install locations; Fedora uses > /usr/lib or /usr/lib64 while Debian uses > /usr/lib/i386-linux-gnu or /usr/lib/x86_64-linux-gnu. > > Currently, our CMake build system model sets the default library > location with (in cmake/modules/instdirs.cmake) > > set( > CMAKE_INSTALL_LIBDIR > ${CMAKE_INSTALL_EXEC_PREFIX}/lib > CACHE PATH "install location for object code libraries" > ) > > Since that default satisfies neither Andrew's nor your needs I assume you > both > have to override CMAKE_INSTALL_LIBDIR. In fact, Andrew (from > debian/rules) overrides it with > -DCMAKE_INSTALL_LIBDIR=/usr/lib/$(DEB_HOST_MULTIARCH) > and I assume you have to do something similar in your spec file. Yup: -DCMAKE_INSTALL_LIBDIR:PATH=%{_libdir} \ > Therefore, to make life more convenient for you both I have used > > set( > CMAKE_INSTALL_PKG_CONFIG_DIR > ${CMAKE_INSTALL_LIBDIR}/pkgconfig > CACHE PATH "install location for pkg-config *.pc files" > ) > > for the pkg-config default install directory > in my latest commit (3062c3b). Which implies if you override > CMAKE_INSTALL_LIBDIR, then CMAKE_INSTALL_PKG_CONFIG_DIR automatically > gets overridden as well so you don't have to override it separately. So > please try again with that commit to see if it satisfies your > needs. Interestingly (perhaps), the pkgconfig files were always installed properly before without configuring - perhaps because it used pkg-config to determine the location? I assume that's what should be done. > >> In the Makefiles I get: >> > > [...] 
>> ocaml/Makefile:RPATHCMD = -ccopt '' >> >> Is that right? > > Well, I double checked and that is indeed the expected result if > -DUSE_RPATH=OFF. However, I don't comprehensively build- or run-test > -DUSE_RPATH=OFF here (I have enough such testing on my plate already > with default -DUSE_RPATH=ON), and instead leave such testing to > distribution packagers who really do need -DUSE_RPATH=OFF. So really > the definitive test of that result is do the ocaml examples build and > run fine for you with that null string -ccopt option for the > traditional build of installed examples or does the OCaml compiler > choke on that null string? Run "make noninteractive >& > noninteractive.out" in $prefix/share/plplot5.11.0/examples to find > out. > > Alan -- Orion Poplawski Technical Manager 303-415-9701 x222 NWRA/CoRA Division FAX: 303-415-9702 3380 Mitchell Lane or...@co... Boulder, CO 80301 http://www.cora.nwra.com |
From: Phil R. <p.d...@gm...> - 2015-05-26 21:17:20
|
Hi Alan I think this one had escaped my to-do list. It is now back on and I will let you know. Phil On 26 May 2015 at 21:52, Jim Dishaw <ji...@di...> wrote: > I believe this bug is due to an extra EOP (or BOP, I can't remember now) call that I mentioned in email several months ago. I can't search the mailing list right now, but I can do it later if needed. > > > >> On May 26, 2015, at 4:35 PM, Alan W. Irwin <ir...@be...> wrote: >> >> Hi Phil: >> >> A known bug for the new wxwidgets device is that the -np option >> (which should automatically display all the pages of the example and >> exit without any user intervention) does not work. >> >> This bug especially impacts interactive testing since manually >> clicking on the wxPLViewer GUI to see all the pages and exit the GUI >> for each tested example gets really old really fast. It's for this >> reason of convenience I have recently (commit id 818b93a) temporarily >> excluded testing wxwidgets as part of the test_interactive targets of >> our three different build systems. >> >> When you do fix -np for the wxwidgets device please give me a heads up >> so I can include tests of wxwidgets again in the test_interactive >> targets. >> >> Alan |
From: Phil R. <p.d...@gm...> - 2015-05-26 21:13:42
|
On 24 May 2015 at 23:01, Alan W. Irwin <ir...@be...> wrote: > Hi Phil: > > This is long, but you have given me lots to respond to. :-) > > On 2015-05-24 09:37+0100 Phil Rosenberg wrote: > >> Hi Alan and Dave >> >> Some specific comments first, then some general ones after. >> >>> Fundamentally, the git world is split on the rebase-only >>> versus merge-only question >> >> As it happens I fall in the merge camp. But for the work we have been >> doing up to now I don't feel there has been much difference either >> way. But this is personal and I know you feel strongly about rebase >> only so I have no intention to push you on this. > > > Just to explain further why I am being so conservative about this.... > The advice I got from Brad King (who has experience advising a large > number of software projects on the svn to git transition) was to stick > with our current rebase-only workflow until all developers were up to > speed with git. I would argue we are not there yet since some of the > PLplot developers who were active in the svn era have not contributed > a single commit yet in the git era. Some of those might just be > missing in action for other reasons, but I know of at least one case > where intimidation concerning git has played a significant role in > delaying participation for at least a while, but I am hoping he will > overcome his fears and become an active developer for PLplot again. > So this is definitely not a good time to start fooling with the > workflow. Yes I can understand this. I would also like C++ to be as inclusive as it can be. I didn't really know SVN when we started so I had nothing to "convert" from. 
> > Once we do get to the stage of being up to speed with git as a > development team, Brad went on to argue that moving to a merge-only > workflow that preserved a clean first-parent shape of history was not > for the faint-hearted and would require a merge czar (his current role > in CMake development) to fight through all the complicated merge > issues for the master branch with that merge czar being essentially > the only one in control of the master branch. So the choices for the > merge-only model seem to be to either have a merge czar or abandon all > workflow rules which would mean that the history DAG was extremely > chaotic. > > I don't like either of those choices. > > I think none of us, including me, are qualified to be the merge czar, > and in any case I think it is bad politics for such a small > development community to have just one or two gatekeepers for the > master branch. In my view it is much better for all our active > developers to feel responsible for the quality of PLplot including our > git history, and the rule for enforcing rebase-only workflow (no merge > commits allowed) is in principle a lot easier to understand than the > much more complicated rule required to have a good first-parent shape > for the history. I agree with this, in that I would not want to have a single merge czar. > > I think providing a meaningful history is really important for such > development work as git bisection to find regressions. To expand on > that concern, I just skimmed an interesting paper called "Fighting > regressions with git bisect" by Christian Couder which I highly > recommend to others here. (You can probably find it with a Google > search, but for Debian wheezy it appears as > <file:///usr/share/doc/git/html/git-bisect-lk2009.html>.) The > principal conclusion I drew from this paper was git bisection results > are more reliable the simpler the history. 
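As a concrete illustration of the bisection being discussed, here is a toy sandbox (everything in it is invented: the repository, the "state" file that plays the role of a failing test, and the choice of commit 6 as the regression) showing "git bisect run" finding the first bad commit automatically.

```shell
set -e
cd "$(mktemp -d)"
git init -q repo
cd repo
git config user.email "dev@example.org"
git config user.name "Dev"

# Ten commits on a linear history; commit 6 introduces the "regression".
for i in $(seq 1 10); do
  echo "$i" > n
  if [ "$i" -ge 6 ]; then echo "bug" > state; else echo "ok" > state; fi
  git add n state
  git commit -q -m "commit $i"
done

# Bisect between the root (known good) and HEAD (known bad), letting a
# test command drive the search: exit 0 means good, nonzero means bad.
git bisect start HEAD "$(git rev-list --max-parents=0 HEAD)"
git bisect run sh -c 'grep -q ok state'
first_bad=$(git rev-parse refs/bisect/bad)
git bisect reset
git log --format=%s -n 1 "$first_bad"
```

On a linear history like this the search takes only a handful of checkouts; with merge-heavy history the search space and the reliability of the answer degrade, which is the paper's point.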
And git bisection really > is a killer app that I and others used quite a bit in the last release > cycle to find regressions, and I would personally hate to compromise > that killer app by allowing our history to be chaotic due to an > uncontrolled merge workflow. Yeah, I have used git bisect a few times and it was useful. I haven't thought about how it would work with merges, but I can imagine how life would be easier with a linear workflow. Although presumably with merges, git bisect would simply fork down each branch? > >>> However, we could allow for deliberate rebasing (e.g., to >>> propagate a must-have fix from master to throwaway_plplot6), but that >>> would have to be scheduled a couple of days in advance >> >> I feel very strongly against this. If somebody misses the deadline >> because they are off email or their work isn't in a state to commit >> (i.e. it would break the build) then we could easily lose large chunks >> of work that somebody has created. In my opinion we absolutely must >> not rebase a branch we are working on. Ever. > > > I think that restriction is too strong. Instead, in response to your > concern, I think we could establish the rule that the guy who is > proposing the rebase could simply wait for a positive OK from > everybody who is actively working on the public topic branch (which > might mean in practice the rebase only occurs right at the end when > the topic has matured, see below). My feeling is that your in-practice prediction would be true. However I do strongly believe that any rebase of a branch we are all using will kill us. Asking someone who might not be comfortable with git to manage the destruction and recreation of their entire history would be a nightmare. As I said with just me using 3 computers I could not get a system that worked. With more of us I can guarantee it won't. 
Don't forget also that there are likely to be a number of developments that aren't quite ready in time for release so even a final rebase will stuff up all of these. > >> >> So perhaps some general points now >> >> In the last development cycle I tried to work simultaneously on my >> Windows machine and two Linux machines to test my personal branches on >> all three. The rebase only workflow made this essentially impossible. >> I continually ended up in situations where I broke my repos because I >> rebased some work that existed in another repo and this caused massive >> issues. This was one person with three checked out versions of the >> code and it was a nightmare. If we have more than one person then I >> can guarantee we will break people's repos with almost every rebase. > > > I think this is an illustration of the point I made previously in this > thread that pushing between multiple servers is virtually a guarantee > of merge commits which are prohibited under the rebase-only workflow. > > However, I think you could have made the above work quite simply as > follows (or at least I would like to see you try this method next > time): > > 1. Always keep master on each server exactly the same as master on > our official SF server. > > 2. Always rebase your topic branch on each server on that (common) > master branch. This doesn't work. Let's say I have my topic branch on PC1 and PC2. If I rebase my topic branch on PC1 then history gets rewritten and all my commits get new IDs. Now if I try to sync PC2 it sees that all the commits in the branch it was tracking have disappeared and there are a load of new ones there instead and basically the branch ends up broken. I might be misremembering the details, but I kept repeatedly tying myself in knots with this, whatever strategy I tried. > > Those two rules mean every topic branch on each of your servers is > identical except for additional development you have made on that > topic branch on one of your servers. 
But then it should be trivial > using the "git format-patch"/"git am" method to update all your > servers' topic branches to be identical with the one where you have > done additional recent development. Note especially I have found the > --interactive option of "git am" and "git log --oneline" to be quite > effective in selecting the commits that will be applied from a series > generated by "git format-patch". Okay, yes, I could try to do things with patches, but then why are we bothering with git? > >> One question that might have an impact. Do we wish to continue >> supporting PLplot 5 for some time with bug fixes? It might be that >> some users have legacy software that relies on the v5 API. So maybe we >> should consider having permanently separate v5 and v6 branches? I'm >> not sure what this does to our development model. > > > See below for an answer to this question. > >> >> I'm not sure this is right, but I would assume that if we apply a bug >> fix to the v5 branch then create a patch of this commit and apply that >> to the v6 branch then if we ever merge (or rebase) the branches then >> git is clever enough to not create a conflict. Is this correct? > > > I don't think we should limit how we develop on throwaway-plplot6 by > trying to avoid in advance rebase conflict issues. So using patches > from master to throwaway-plplot6 or rebasing (if you can get complete > agreement to that step for all active developers at the time) should > be fine. Of course, when we do our final rebase before the merge (see > below), we will just have to deal with conflicts the way that is > described in "git help rebase". > >> >> So in my opinion we have limited options (in no particular order) >> 1)We just don't run a parallel v6 branch. 
>> 2)We run a parallel branch permanently and if we have commits we wish >> to apply to both v5 and v6 we do so with a patch >> 3)We run a parallel branch permanently and if we have commits we wish >> to apply to both v5 and v6 we do a rebase (I think this would be very >> bad!!!) >> 4)We move to a merge workflow >> 5)We hide our v6 branch so we only break our own when we rebase only >> once when v6 is ready (already discounted by Alan) >> >> Out of all those perhaps the idea of having a v5 and v6 branch that we >> actually never merge together, and use patches to commit to both gives >> us the advantage of parallel branches and also rebase workflow? >> > > That last is pretty close to one of the two options I proposed so I > think we are quite close to consensus here. In my proposal the names > of the two public branches would be throwaway-plplot6 for PLplot 6 > development and master for PLplot 5 development. And even if you are > uncomfortable with the rebase method I proposed above to deal with the > developer who is temporarily out of e-mail contact, I think it is > important to rebase at least when throwaway-plplot6 has matured to > make sure all innovations and bug fixes that are on the master branch > that are relevant to PLplot 6 are continued when throwaway-plplot6 is > merged into master. > > To make that proposal more specific we should do the following once > throwaway-plplot6 has matured. > > 1. Tag the tip of the master branch (with a name like > plplot5-branchpoint for easy future reference). > > 2. Rebase throwaway-plplot6 with master (making sure that > everyone is aware of this so that nobody is left behind > by this change). > > 3. merge --ff-only throwaway-plplot6 onto master. > > 4. Delete throwaway-plplot6. 
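The four steps above can be walked through in a throwaway sandbox (the repository, files, and commits are invented for the illustration; the branch and tag names follow the text):

```shell
set -e
cd "$(mktemp -d)"
git init -q repo
cd repo
git symbolic-ref HEAD refs/heads/master   # pin the branch name used below
git config user.email "dev@example.org"
git config user.name "Dev"

echo "base" > README
git add README
git commit -q -m "common base"

# Matured PLplot 6 work on the public topic branch...
git checkout -q -b throwaway-plplot6
echo "six" > six.c
git add six.c
git commit -q -m "PLplot 6 work"

# ...while master accumulated PLplot 5 fixes.
git checkout -q master
echo "fix" > fix.c
git add fix.c
git commit -q -m "PLplot 5 fix"

# 1. Tag the tip of master for any future PLplot 5 bug-fix branch.
git tag plplot5-branchpoint master

# 2. Rebase the matured topic branch on master.
git rebase -q master throwaway-plplot6

# 3. Fast-forward master to the rebased topic branch.
git checkout -q master
git merge -q --ff-only throwaway-plplot6

# 4. Delete the topic branch.
git branch -d throwaway-plplot6
```

Because step 2 rebases the topic branch onto the tip of master, step 3 is a pure fast-forward, so the resulting history stays linear and the tag preserves the last pure-PLplot-5 commit.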
> > In other words, we generally treat the public throwaway-plplot6 branch > the same as we ordinarily treat a small private topic branch except that > the rebasing of throwaway-plplot6 will probably be more limited due to > the developer coordination reasons discussed above. > > In sum, I think this is a good compromise git development proposal > that works well for our rebase-only workflow, and which also satisfies > your concerns with ease of collaboration for many developers working > on a public topic branch. > > Assuming you agree with this proposal and nobody else can find some > git issue with it, then I would like to flesh out how I visualize the > transition between PLplot 5 and PLplot 6. > > The maturation stages that PLplot 6 will go through are something > like the following: > > 1. Just beginning to work, i.e., parts of it work on one specific > platform. (I assume it would take you only a week or so to achieve > this with a C++ plplot core with decent error propagation and a > corresponding C wrapper.) > > 2. Partly working, i.e., most components work on most platforms we > have access to. We would want this to be an all-out effort > concentrating strictly on introducing all backwards-incompatible > changes we want for PLplot 6. That is, I hope this can be done in a > month or so rather than dragging it out for years, with the attendant > feature-creep problems that tend to be an issue for unreleased > software. > > 3. Mostly working, i.e., the comprehensive test script works for > everyone on all platforms we can access as well as it does for PLplot > 5. The amount of time and effort going into this part of the effort > will depend strongly on how much test platform coverage we already > have achieved for PLplot 5, but I am hoping we achieve large coverage > both for PLplot 5 very soon now and for PLplot 6 when it is ready so that > users of either will have a smooth ride on most platforms. > > 4. 
Completely works for us, e.g., the comprehensive test script without > any special options (i.e., testing everything) works for all of us > on the various platforms we have access to. > > Note, I continue to strive for stage 4 with PLplot 5 which makes stage > 3 a moving target for PLplot 6. For example, with my help both in > selecting packages and with any further build system issues we run > into I hope Arjen will be able to achieve perfect comprehensive test > results in the next few weeks for a Cygwin platform where all possible > PLplot prerequisites have been installed. And once that goal is > achieved and with willingness of our developers to run the tests on > the various platforms accessible to them, it should be fairly > straightforward to achieve similar good results on MinGW-w64/MSYS2 (as > a close analog of the Cygwin success) and Mac OS X (as a fairly close > Unix analog of our Linux success). But, I am pretty sure the > equivalent comprehensive testing success on MSVC is going to be more > difficult to achieve (even for the limited prerequisites that are > available on that platform) so I expect the distinction between stages > 3 and 4 for PLplot 6 is going to be a real one for some time to come. > > Also note that PLplot 6 is going to be backwards-incompatible with > PLplot 5 so making that transition is going to be painful for our > users without much to gain from their perspective other than a > superior response to errors. So we want to reduce their pain as much > as possible by avoiding a release of PLplot 6 before it is ready. > Furthermore, we have limited manpower so we want to minimize the > length of time we have to support both PLplot 5 and PLplot 6, and the > best way to do that is do not release PLplot 6 until it is ready, i.e., > it has _completely_ achieved stage 3 above. 
So what I have in mind is > that only when stage 3 has been achieved do we do the above steps to merge > throwaway-plplot6 onto master and release PLplot-5.99.y releases (6.0 > release candidates) soon after for our users to evaluate, followed by > the 6.0.0 release just as soon as we have no more user complaints > about those release candidates. > > Note, I am emphasizing speed of development here and reliability > rather than features. However, it sounds from what you have said that > propagating all error conditions will be trivial so it will be nice to > have that feature right away as (say) the sole initial selling point > of PLplot 6.0.0 to help pay for the user pain of the transition > from PLplot 5 to 6. > So I think I agree with most stuff up to here apart from rebasing during 6.0 development and maybe what happens at the "end". See below > If we follow that model it should be possible in theory to go smoothly > from the last PLplot 5.x release (where x < 99) to the release of > 6.0.0 with no overlap in support between PLplot 6 and 5, but we should > definitely tag the last PLplot 5 commit (as I stated in the details > above), and subsequently (if needed) use that tag as the origin of a > semi-permanent public plplot5 branch if we need to do further PLplot > 5.x bug-fix releases. Obviously in this case the plplot5 branch would > never be rebased or merged with the master branch since PLplot 5 would > be a dead-end branch of development (only devoted to minimal bug > fixing) by design. So I am now rather out of my depth here. This should definitely be considered opinion for discussion and I would love feedback on this. Many projects seem to have some form of bugfix support for old versions or at the very least the option to install older versions. For example I can still go to my package manager and install GTK2 and wxWidgets 2.8, despite the fact that GTK3 and wxWidgets 3.0 exist. The reasons are clear. 
If someone has code that uses an old API then they may not want to rewrite it for the new API; and if someone has written another package which depends on plplot, then when we change our API and the version that exists in the repos, we break all those builds. I can therefore see a big advantage to simply leaving plplot5 frozen, and creating a "new" library/package called plplot6 which moves forward with the API. That way apt-get install plplot still gives me a package compatible with the other binaries as expected, and when ready a user can choose to upgrade to plplot6. As Jim mentioned in an email, PLplot 5 might be around for some time to come. Therefore to me, maybe it is worth never rebasing the v6 branch onto v5. This removes all argument about the best way to do it. However, if we do want to make that merge, then despite the fact that I have stated that I do not like the idea of a gatekeeper for merges, I feel this would be one situation where it was valid. If at the end we do want to merge plplot 5 and plplot 6, then I think that is exactly what we should do. In this one single instance to save us a lot of pain I think we should do a merge. I think that one single merge only at the changeover of major versions would be entirely acceptable even to most rebase advocates, and I am sure this would be better than a rebase of a public branch. > > Sorry this has been so long, but there is a lot to think about beyond > just the programming aspects when planning our move from PLplot 5 to > PLplot 6! > > > Alan > __________________________ > Alan W. Irwin > > Astronomical research affiliation with Department of Physics and Astronomy, > University of Victoria (astrowww.phys.uvic.ca). 
> > Programming affiliations with the FreeEOS equation-of-state > implementation for stellar interiors (freeeos.sf.net); the Time > Ephemerides project (timeephem.sf.net); PLplot scientific plotting > software package (plplot.sf.net); the libLASi project > (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); > and the Linux Brochure Project (lbproject.sf.net). > __________________________ > > Linux-powered Science > __________________________ |
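[Editorial note: the "one single merge at the changeover of major versions" idea discussed above can be sketched with a self-contained toy repository. Everything below is fabricated for illustration; only the branch name throwaway-plplot6 and the idea of tagging the last PLplot 5 commit come from the thread.]

```shell
# Toy demonstration: tag the last PLplot 5 commit, then bring the PLplot 6
# work onto master with one explicit merge instead of rebasing the public
# branch. Repo contents and the tag name are invented for illustration.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.org
git config user.name "PLplot Dev"

# Some PLplot 5 history on the default branch.
echo "plplot 5" > core.c
git add core.c
git commit -qm "PLplot 5 work"
master=$(git symbolic-ref --short HEAD)

# Tag the last PLplot 5 commit so a semi-permanent plplot5 bug-fix
# branch could be started from it later if needed.
git tag -a plplot-5-final -m "Last PLplot 5 commit"

# PLplot 6 development happens on a public topic branch...
git checkout -qb throwaway-plplot6
echo "plplot 6" > core.c
git commit -qam "PLplot 6 work"

# ...and comes back with one non-fast-forward merge at the changeover.
git checkout -q "$master"
git merge --no-ff -q -m "Merge PLplot 6 development onto master" throwaway-plplot6
git log --oneline --graph
```

The --no-ff flag forces a real merge commit even when a fast-forward would be possible, so the changeover point stays visible in the history.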
From: J. L. M. <jl...@im...> - 2015-05-26 21:12:05
|
On 5/25/15 2:26 PM, Jim Dishaw wrote: > I just built from the current repository on a Mac (OS X 10.10.3) using > the steps from the documentation and did not get any errors. > > My build environment: > > CMake > version 3.2.2 from MacPorts > > CC > Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn) > Target: x86_64-apple-darwin14.3.0 > > Lewis, what is your build environment? Hi, Jim. Thanks for looking into this. My build environment was pkgsrc-2015Q1 with CMake 3.2.2 backported from pkgsrc-current, and, I can't remember, I may have had to backport one other package too. The compiler was Xcode 6.3.1. If it builds fine for you, then forget it. I'll try again when pkgsrc-2015Q2 is released and report back if I still have a problem then. Regards, Lewis |
From: Jim D. <ji...@di...> - 2015-05-26 20:52:54
|
I believe this bug is due to an extra EOP (or BOP, I can't remember now) call that I mentioned in an email several months ago. I can't search the mailing list right now, but I can do it later if needed. > On May 26, 2015, at 4:35 PM, Alan W. Irwin <ir...@be...> wrote: > > Hi Phil: > > A known bug for the new wxwidgets device is that the -np option > (which should automatically display all the pages of the example and > exit without any user intervention) does not work. > > This bug especially impacts interactive testing since manually > clicking on the wxPLViewer GUI to see all the pages and exit the GUI > for each tested example gets really old really fast. It's for this > reason of convenience I have recently (commit id 818b93a) temporarily > excluded testing wxwidgets as part of the test_interactive targets of > our three different build systems. > > When you do fix -np for the wxwidgets device please give me a heads up > so I can include tests of wxwidgets again in the test_interactive > targets. > > Alan > __________________________ > Alan W. Irwin > > Astronomical research affiliation with Department of Physics and Astronomy, > University of Victoria (astrowww.phys.uvic.ca). > > Programming affiliations with the FreeEOS equation-of-state > implementation for stellar interiors (freeeos.sf.net); the Time > Ephemerides project (timeephem.sf.net); PLplot scientific plotting > software package (plplot.sf.net); the libLASi project > (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); > and the Linux Brochure Project (lbproject.sf.net). > __________________________ > > Linux-powered Science > __________________________ > > ------------------------------------------------------------------------------ > _______________________________________________ > Plplot-devel mailing list > Plp...@li... > https://lists.sourceforge.net/lists/listinfo/plplot-devel |
From: Alan W. I. <ir...@be...> - 2015-05-26 20:35:40
|
Hi Phil: A known bug for the new wxwidgets device is that the -np option (which should automatically display all the pages of the example and exit without any user intervention) does not work. This bug especially impacts interactive testing since manually clicking on the wxPLViewer GUI to see all the pages and exit the GUI for each tested example gets really old really fast. It's for this reason of convenience I have recently (commit id 818b93a) temporarily excluded testing wxwidgets as part of the test_interactive targets of our three different build systems. When you do fix -np for the wxwidgets device please give me a heads up so I can include tests of wxwidgets again in the test_interactive targets. Alan __________________________ Alan W. Irwin Astronomical research affiliation with Department of Physics and Astronomy, University of Victoria (astrowww.phys.uvic.ca). Programming affiliations with the FreeEOS equation-of-state implementation for stellar interiors (freeeos.sf.net); the Time Ephemerides project (timeephem.sf.net); PLplot scientific plotting software package (plplot.sf.net); the libLASi project (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); and the Linux Brochure Project (lbproject.sf.net). __________________________ Linux-powered Science __________________________ |
From: Jim D. <ji...@di...> - 2015-05-26 20:17:23
|
> On May 26, 2015, at 3:53 PM, Phil Rosenberg <p.d...@gm...> wrote: > > Okay, well it was an option I thought I would put out there as I think > it was worth considering. I will try to answer the questions people > had anyway in case people are interested. > > Regarding domestic bindings, the C front end would remain. Although > the API would have to change if we wanted to return an error code, but > this is about plplot internal code. What is the maximum number of > levels back which we might need to pass an error code? A dozen or more > probably. The idea is to avoid all that internal bookkeeping. > Especially if 10 layers down a memory allocation fails. Every layer up > from this needs to have code to deal with checking error codes for > every function it calls and then free resources and propagate that > error code. With the C++ model a throw does all that automatically. > I was looking at consolidating all the error/warning message routines into one or two functions and create a new file in the src directory. One of the motivations was to clean up some of the mallocs and static char array allocations that are scattered throughout the code. I have some code that I have been using for a while that I was going to offer. The other advantage is that it makes it easier to implement different translations. > Regarding efficiency. There would almost certainly be no change. The > compiler would probably optimise away anything superfluous. > > As far as I know C++ is ubiquitous. The earliest C++ compilers > actually rewrote the C++ to C and passed it to a C compiler! > > Compilation speed for C++ can be slower if there is a lot of use of > Templates. Templates are functions or classes which can have a version > for any variable type, so instead of writing > int round( int var, int basis); > float round( float var, float basis); > double round( double var, double basis); > ... 
> > you have > template<class T> > T round( T var, T basis); > > Then the user can call round with any type and it will work, providing > the code is compilable when you replace T with your type or class. But > there is a compile-time cost associated with the extra work required by > the compiler to do this. > > Regarding name mangling we can use extern "C" to give C name mangling to > the API, then the library, both static and dynamic as I understand it, > behaves just like a C library. > > As far as using a C++ compiler is concerned I am afraid that is > something I have never worried about as I only write C++ programs so I > always link against C runtimes. But as Alan says we already use C++ > code in PLPlot so this should already be taken care of. > > If we are not interested in moving to C++ then we still need a method > to propagate errors up through the multiple layers of code in the > PLPlot library and do so in a way which minimises the risk of memory > or other resource leaks. Because I work in C++ I'm not necessarily the > best person to know best practice for this. But here are some ideas. I > would be very interested to hear comments and more suggestions. > > 1) We make every internal function in PLPlot return an error code and > check these codes religiously at every call. Even simple functions > would probably have to do this because in the future simple functions > may grow to more complex ones and changing a function that previously > returned a meaningful value to one which returns an error code is > likely to be error prone so best get them all out of the way. > > 2) We include an error code in the PLStream struct. We then check this > after every function call to check if it has changed. Again this just > requires us to be religious about checking the current error code. > > 3) I just found out about setjmp and longjmp. These are C macros. As > far as I can tell setjmp saves the state of the program stack and > creates a jump point. 
Calling longjmp jumps back to the jump point > and restores the state. However, any memory malloc'd in the meantime > will leak, unless we have some way to free it later. This might be > conceivable by having an array of pointers in the PLStream to all > memory allocated (maybe create functions plmalloc and plfree to deal > with this?) which we can deal with at the end. Does anyone have any > experience with setjmp and longjmp or any advice? I also don't know > how it deals with re-entrancy or nesting of setjmp calls. Or how to > deal with our C++ code calling our C code - does our C++ code only > call the public API or does it have access to internal routines? > I have used the setjmp/longjmp approach. You definitely want to minimize the mallocs and the "distance" you jump. I prefer to do big mallocs and then partition the chunk of memory as needed rather than calling malloc multiple times. That helps when unwinding to clean up on an error condition. Wrapping malloc/free with a plmalloc/plfree approach can be useful if the intent is to help cleanup. I do not recommend trying to manage memory because it is a hard job to do correctly. > 4) Any other suggestions? > > All these methods require us to have some strategy for dealing with > freeing memory although it is often trivial, in some cases this can be > complex and a strong candidate for future bugs and memory leaks. > > I'm really keen to hear people's thoughts on this. As I said I work in > C++, so I don't know best practice in C for dealing with this, but we > definitely need to make a call and have a definite strategy for this > otherwise it will be a nightmare. > > Phil > > >> On 25 May 2015 at 21:06, Alan W. Irwin <ir...@be...> wrote: >>> On 2015-05-25 17:29-0000 Arjen Markus wrote: >>> >>> An issue related to the use of C++ that has not been raised yet, but >> which surfaced recently in my comprehensive testing efforts is the >> fact that linking a C++ program requires a C++-enabled linker. 
Thus >> introducing C++ as the language in which PLplot is to be implemented >> would complicate the use of static builds. That may not be the most >> common option nowadays, but I think we need to take a conscious >> decision: do we want to continue to support static builds or not? One >> pro for static builds is that they make deployment, especially of >> binary-only distributions much easier (and safer). >> >> Hi Arjen: >> >> Yes, I think we should keep supporting static builds which work >> virtually perfectly now with our CMake-based build systems for the >> build of PLplot from source and the build of the installed examples. >> >> I assume what you mean by a C++-enabled linker is that extra libraries >> have to be linked in for that case. Our CMake build handles this >> situation with ease both for the core build and installed examples >> build. So static linking is already completely supported in that case. >> >> Of course, there is currently a limitation on our traditional >> (Makefile + pkg-config) build for e.g., Fortran examples where >> pkg-config does not automatically know the name/location of the >> special C++ library that needs to be linked in for a given C++ >> compiler so the user would have to add that link himself to the >> pkg-config results to successfully build the Fortran examples using >> our traditional build system for the installed examples. >> >> Currently this problem occurs if C++ code is included in libplplot >> from our C++ device drivers, psttf, qt, and wxwidgets. But that is a >> very common case (or should be) to have those device drivers enabled >> so if we adopt C++ for the core library this limitation in our >> traditional build will essentially just stay the same in most cases. >> So I don't see this limitation of our traditional build system for the >> installed examples as a big concern with switching our core library to >> C++ in all cases. 
>> >> Don't get me wrong, I would like this limitation to be resolved so >> that our traditional build of the installed examples works as well as >> the CMake build of those. When discussing this with Andrew I >> mentioned one possibility for implementing a fix for this issue, but >> that is a lot of work which I am going to leave to others if they want >> to make our Makefile+pkg-config approach as high quality as the >> CMake-based one for building our installed examples. >> >> Alan >> >> __________________________ >> Alan W. Irwin >> >> Astronomical research affiliation with Department of Physics and Astronomy, >> University of Victoria (astrowww.phys.uvic.ca). >> >> Programming affiliations with the FreeEOS equation-of-state >> implementation for stellar interiors (freeeos.sf.net); the Time >> Ephemerides project (timeephem.sf.net); PLplot scientific plotting >> software package (plplot.sf.net); the libLASi project >> (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); >> and the Linux Brochure Project (lbproject.sf.net). >> __________________________ >> >> Linux-powered Science >> __________________________ >> >> ------------------------------------------------------------------------------ >> One dashboard for servers and applications across Physical-Virtual-Cloud >> Widest out-of-the-box monitoring support with 50+ applications >> Performance metrics, stats and reports that give you Actionable Insights >> Deep dive visibility with transaction tracing using APM Insight. >> http://ad.doubleclick.net/ddm/clk/290420510;117567292;y >> _______________________________________________ >> Plplot-devel mailing list >> Plp...@li... >> https://lists.sourceforge.net/lists/listinfo/plplot-devel > > ------------------------------------------------------------------------------ > _______________________________________________ > Plplot-devel mailing list > Plp...@li... > https://lists.sourceforge.net/lists/listinfo/plplot-devel |
From: Phil R. <p.d...@gm...> - 2015-05-26 19:53:49
|
Okay, well it was an option I thought I would put out there as I think it was worth considering. I will try to answer the questions people had anyway in case people are interested. Regarding domestic bindings, the C front end would remain. Although the API would have to change if we wanted to return an error code, but this is about plplot internal code. What is the maximum number of levels back which we might need to pass an error code? A dozen or more probably. The idea is to avoid all that internal bookkeeping. Especially if 10 layers down a memory allocation fails. Every layer up from this needs to have code to deal with checking error codes for every function it calls and then free resources and propagate that error code. With the C++ model a throw does all that automatically. Regarding efficiency. There would almost certainly be no change. The compiler would probably optimise away anything superfluous. As far as I know C++ is ubiquitous. The earliest C++ compilers actually rewrote the C++ to C and passed it to a C compiler! Compilation speed for C++ can be slower if there is a lot of use of Templates. Templates are functions or classes which can have a version for any variable type, so instead of writing int round( int var, int basis); float round( float var, float basis); double round( double var, double basis); ... you have template<class T> T round( T var, T basis); Then the user can call round with any type and it will work, providing the code is compilable when you replace T with your type or class. But there is a compile-time cost associated with the extra work required by the compiler to do this. Regarding name mangling we can use extern "C" to give C name mangling to the API, then the library, both static and dynamic as I understand it, behaves just like a C library. As far as using a C++ compiler is concerned I am afraid that is something I have never worried about as I only write C++ programs so I always link against C runtimes. 
But as Alan says we already use C++ code in PLPlot so this should already be taken care of. If we are not interested in moving to C++ then we still need a method to propagate errors up through the multiple layers of code in the PLPlot library and do so in a way which minimises the risk of memory or other resource leaks. Because I work in C++ I'm not necessarily the best person to know best practice for this. But here are some ideas. I would be very interested to hear comments and more suggestions. 1) We make every internal function in PLPlot return an error code and check these codes religiously at every call. Even simple functions would probably have to do this because in the future simple functions may grow to more complex ones and changing a function that previously returned a meaningful value to one which returns an error code is likely to be error prone so best get them all out of the way. 2) We include an error code in the PLStream struct. We then check this after every function call to check if it has changed. Again this just requires us to be religious about checking the current error code. 3) I just found out about setjmp and longjmp. These are C macros. As far as I can tell setjmp saves the state of the program stack and creates a jump point. Calling longjmp jumps back to the jump point and restores the state. However, any memory malloc'd in the meantime will leak, unless we have some way to free it later. This might be conceivable by having an array of pointers in the PLStream to all memory allocated (maybe create functions plmalloc and plfree to deal with this?) which we can deal with at the end. Does anyone have any experience with setjmp and longjmp or any advice? I also don't know how it deals with re-entrancy or nesting of setjmp calls. Or how to deal with our C++ code calling our C code - does our C++ code only call the public API or does it have access to internal routines? 4) Any other suggestions? 
All these methods require us to have some strategy for dealing with freeing memory although it is often trivial, in some cases this can be complex and a strong candidate for future bugs and memory leaks. I'm really keen to hear people's thoughts on this. As I said I work in C++, so I don't know best practice in C for dealing with this, but we definitely need to make a call and have a definite strategy for this otherwise it will be a nightmare. Phil On 25 May 2015 at 21:06, Alan W. Irwin <ir...@be...> wrote: > On 2015-05-25 17:29-0000 Arjen Markus wrote: > >> An issue related to the use of C++ that has not been raised yet, but > which surfaced recently in my comprehensive testing efforts is the > fact that linking a C++ program requires a C++-enabled linker. Thus > introducing C++ as the language in which PLplot is to be implemented > would complicate the use of static builds. That may not be the most > common option nowadays, but I think we need to take a conscious > decision: do we want to continue to support static builds or not? One > pro for static builds is that they make deployment, especially of > binary-only distributions much easier (and safer). > > Hi Arjen: > > Yes, I think we should keep supporting static builds which work > virtually perfectly now with our CMake-based build systems for the > build of PLplot from source and the build of the installed examples. > > I assume what you mean by a C++-enabled linker is that extra libraries > have to be linked in for that case. Our CMake build handles this > situation with ease both for the core build and installed examples > build. So static linking is already completely supported in that case. 
> > Of course, there is currently a limitation on our traditional > (Makefile + pkg-config) build for e.g., Fortran examples where > pkg-config does not automatically know the name/location of the > special C++ library that needs to be linked in for a given C++ > compiler so the user would have to add that link himself to the > pkg-config results to successfully build the Fortran examples using > our traditional build system for the installed examples. > > Currently this problem occurs if C++ code is included in libplplot > from our C++ device drivers, psttf, qt, and wxwidgets. But that is a > very common case (or should be) to have those device drivers enabled > so if we adopt C++ for the core library this limitation in our > traditional build will essentially just stay the same in most cases. > So I don't see this limitation of our traditional build system for the > installed examples as a big concern with switching our core library to > C++ in all cases. > > Don't get me wrong, I would like this limitation to be resolved so > that our traditional build of the installed examples works as well as > the CMake build of those. When discussing this with Andrew I > mentioned one possibility for implementing a fix for this issue, but > that is a lot of work which I am going to leave to others if they want > to make our Makefile+pkg-config approach as high quality as the > CMake-based one for building our installed examples. > > Alan > > __________________________ > Alan W. Irwin > > Astronomical research affiliation with Department of Physics and Astronomy, > University of Victoria (astrowww.phys.uvic.ca). 
> > Programming affiliations with the FreeEOS equation-of-state > implementation for stellar interiors (freeeos.sf.net); the Time > Ephemerides project (timeephem.sf.net); PLplot scientific plotting > software package (plplot.sf.net); the libLASi project > (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); > and the Linux Brochure Project (lbproject.sf.net). > __________________________ > > Linux-powered Science > __________________________ > > ------------------------------------------------------------------------------ > One dashboard for servers and applications across Physical-Virtual-Cloud > Widest out-of-the-box monitoring support with 50+ applications > Performance metrics, stats and reports that give you Actionable Insights > Deep dive visibility with transaction tracing using APM Insight. > http://ad.doubleclick.net/ddm/clk/290420510;117567292;y > _______________________________________________ > Plplot-devel mailing list > Plp...@li... > https://lists.sourceforge.net/lists/listinfo/plplot-devel |
From: Alan W. I. <ir...@be...> - 2015-05-25 20:35:24
|
On 2015-05-25 17:08-0000 Arjen Markus wrote: > Hi Alan, > Right, that worked, that is: I updated the repository first, changed to the new branch and then applied my changes. So "git format-patch master" did the trick. I must have overlooked that possibility in the man page - it describes the options in extenso, but I did not see this one. Anyway, the patches are attached. Hi Arjen: I request two changes in the results: 1. Please use the --stdout option to git format-patch. That allows you to store the result in one file rather than a series of them. 2. Base your topic branch on work that is available to me from the SF repository. The reason I mention this is the files you deleted were not clean. They had some local changes you had made before deleting them so that applying the patches here failed. Here is an example, where I edited your deleted file patch to remove all the starting "-" characters so that it should have been identical with the file to be deleted. Instead, I got this diff between the two results: --- /tmp/test_software 2015-05-25 12:34:54.203951576 -0700 +++ bindings/f95/plstubs.h 2015-03-19 15:39:24.030125232 -0700 @@ -194,7 +194,7 @@ #define PLCTIME FNAME( PLCTIME, plctime ) #define PLEND FNAME( PLEND, plend ) #define PLEND1 FNAME( PLEND1, plend1 ) -#define PLENV FNAME( PLENVF95, plenvf95 ) +#define PLENV FNAME( PLENV, plenv ) #define PLENV0 FNAME( PLENV0, plenv0 ) #define PLEOP FNAME( PLEOP, pleop ) #define PLERRX FNAME( PLERRXF95, plerrxf95 ) @@ -241,7 +241,7 @@ #define PLIMAGEFR17 FNAME( PLIMAGEFR17, plimagefr17 ) #define PLIMAGEFR27 FNAME( PLIMAGEFR27, plimagefr27 ) #define PLIMAGEFR7 FNAME( PLIMAGEFR7, plimagefr7 ) -//#define PLINIT FNAME( PLINIT, plinit ) +#define PLINIT FNAME( PLINIT, plinit ) #define PLJOIN FNAME( PLJOIN, pljoin ) #define PLLAB7 FNAME( PLLAB7, pllab7 ) #define PLLEGEND_CNV_TEXT FNAME( PLLEGEND07_CNV_TEXT, pllegend07_cnv_text ) That is, you had made a local change to the PLENV definition and commented out the PLINIT definition 
before deleting plstubs.h. I am pretty sure the "git rm" command would have complained about your local uncommitted changes to plstubs.h. The only explanation I can think of is you are basing your current work on top of your previous f95-update work and then using git format-patch to extract just the latest few commits from the long range of commits from that older private work. Instead, you should start fresh and base your private f95_update tree on the current SF version of the master branch and then add your commits to that fresh f95_update tree before sending them to me in git format-patch form. Alan __________________________ Alan W. Irwin Astronomical research affiliation with Department of Physics and Astronomy, University of Victoria (astrowww.phys.uvic.ca). Programming affiliations with the FreeEOS equation-of-state implementation for stellar interiors (freeeos.sf.net); the Time Ephemerides project (timeephem.sf.net); PLplot scientific plotting software package (plplot.sf.net); the libLASi project (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); and the Linux Brochure Project (lbproject.sf.net). __________________________ Linux-powered Science __________________________ |
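[Editorial note: Alan's two requests above — base the topic branch freshly on the current master and export the whole series with --stdout — can be demonstrated in a self-contained toy repository. The repo contents are fabricated; only the branch name f95_update comes from the thread.]

```shell
# Toy run of the requested workflow: start the topic branch fresh from
# master, commit, then export the entire series as one file.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email dev@example.org
git config user.name "PLplot Dev"

# Stand-in for the current state of the SF master branch.
echo "base" > bindings.f90
git add bindings.f90
git commit -qm "state of the SF master branch"
master=$(git symbolic-ref --short HEAD)

# Request 2: base the topic branch on current master, then do the work.
git checkout -qb f95_update "$master"
echo "update" >> bindings.f90
git commit -qam "f95: update bindings"

# Request 1: --stdout writes the whole series to a single file.
git format-patch --stdout "$master" > "$repo/f95_update.patch"
head -n 5 "$repo/f95_update.patch"
```

The single patch file can then be mailed as one attachment and applied on the other side with git am.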
From: Alan W. I. <ir...@be...> - 2015-05-25 20:07:09
|
On 2015-05-25 17:29-0000 Arjen Markus wrote: > An issue related to the use of C++ that has not been raised yet, but which surfaced recently in my comprehensive testing efforts is the fact that linking a C++ program requires a C++-enabled linker. Thus introducing C++ as the language in which PLplot is to be implemented would complicate the use of static builds. That may not be the most common option nowadays, but I think we need to take a conscious decision: do we want to continue to support static builds or not? One pro for static builds is that they make deployment, especially of binary-only distributions much easier (and safer). Hi Arjen: Yes, I think we should keep supporting static builds which work virtually perfectly now with our CMake-based build systems for the build of PLplot from source and the build of the installed examples. I assume what you mean by a C++-enabled linker is that extra libraries have to be linked in for that case. Our CMake build handles this situation with ease both for the core build and installed examples build. So static linking is already completely supported in that case. Of course, there is currently a limitation on our traditional (Makefile + pkg-config) build for e.g., Fortran examples where pkg-config does not automatically know the name/location of the special C++ library that needs to be linked in for a given C++ compiler so the user would have to add that link himself to the pkg-config results to successfully build the Fortran examples using our traditional build system for the installed examples. Currently this problem occurs if C++ code is included in libplplot from our C++ device drivers, psttf, qt, and wxwidgets. But that is a very common case (or should be) to have those device drivers enabled so if we adopt C++ for the core library this limitation in our traditional build will essentially just stay the same in most cases. 
So I don't see this limitation of our traditional build system for the installed examples as a big concern with switching our core library to C++ in all cases. Don't get me wrong, I would like this limitation to be resolved so that our traditional build of the installed examples works as well as the CMake build of those. When discussing this with Andrew I mentioned one possibility for implementing a fix for this issue, but that is a lot of work which I am going to leave to others if they want to make our Makefile+pkg-config approach as high quality as the CMake-based one for building our installed examples. Alan __________________________ Alan W. Irwin Astronomical research affiliation with Department of Physics and Astronomy, University of Victoria (astrowww.phys.uvic.ca). Programming affiliations with the FreeEOS equation-of-state implementation for stellar interiors (freeeos.sf.net); the Time Ephemerides project (timeephem.sf.net); PLplot scientific plotting software package (plplot.sf.net); the libLASi project (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); and the Linux Brochure Project (lbproject.sf.net). __________________________ Linux-powered Science __________________________ |
From: Chris M. <dev...@gm...> - 2015-05-25 19:35:45
|
The biggest problem I've had with C++ is the compiler/linker lock-in. Since there is no standard for C++ name mangling, you are basically required to link with the same compiler that the library was compiled with. In principle this can be avoided by exposing an extern "C" API only, but that seems to take more discipline than I have seen in practice. The place where this is a pain is MS Visual C++ versus MinGW on Windows systems. --Chris On 5/25/2015 15:11, Jim Dishaw wrote: > >> On May 25, 2015, at 1:29 PM, Arjen Markus <Arj...@de...> wrote: >> >> Hi everyone, >> An issue related to the use of C++ that has not been raised yet, but which surfaced recently in my comprehensive testing efforts is the fact that linking a C++ program requires a C++-enabled linker. Thus introducing C++ as the language in which PLplot is to be implemented would complicate the use of static builds. That may not be the most common option nowadays, but I think we need to take a conscious decision: do we want to continue to support static builds or not? One pro for static builds is that they make deployment, especially of binary-only distributions, much easier (and safer). >> Regards, >> Arjen > > I don't have a strong opinion either way on C++ vs C, and I think we can achieve the design objectives with either language. I do think we run the risk of breaking compatibility with some percentage of the current uses of PLplot. > > My biggest concern, however, is that switching to C++ will consume a big chunk of time. If we are going to make that investment, then a good scrub of the API should be done to make sure we do not handicap ourselves. > > My top recommendation is that we appoint a manager of the legacy PLplot to support the users during the transition. My gut feeling is that PLplot 5 will be around for quite a while. > > [...] |
From: Jim D. <ji...@di...> - 2015-05-25 19:27:22
|
> On May 21, 2015, at 11:21 PM, Alan W. Irwin <ir...@be...> wrote: > > Hi Lewis: > > I am going to use a new subject line for this question for obvious > reasons. > > On 2015-05-21 17:51-0500 J. Lewis Muir wrote: > >> P.S. Even though the problem I reported appears to have been fixed, >> my build did not succeed, so I'm guessing I hit a new issue. The >> build failed as indicated below. Unfortunately, I don't have time to >> investigate, so this is really just an FYI. If you have people testing >> on OS X and things work for them, then it may be something related to my >> setup or building in pkgsrc. Or maybe you don't have people testing on >> OS X, and this is something that you'll eventually run into on OS X at >> some point before you make an official 5.11.1 release. This is on OS X >> Yosemite 10.10.3. > > The fact is we have suffered for a very long time from a lack of > testing on the OS X platform so some build bugs may be currently > present on that platform. However, if those exist it should be > absolutely straightforward to fix them once we know exactly what they > are. I do know some of our developers here have access to that > platform so I hope your report inspires them to do some testing on > that platform starting with using the VERBOSE=1 option for the make > command to find out exactly how our CMake-based build system > instructions translate to actual build commands on OS X. > > If some developer here with OS X build expertise is interested in > testing on that platform, my additional advice is to look at our > current override file which says (based on Mac OS X knowledge from > nearly a decade ago!) > I just built from the current repository on a Mac (OS X 10.10.3) using the steps from the documentation and did not get any errors. My build environment: CMake version 3.2.2 from MacPorts CC Apple LLVM version 6.1.0 (clang-602.0.53) (based on LLVM 3.6.0svn) Target: x86_64-apple-darwin14.3.0 Lewis, what is your build environment? |
From: Jim D. <ji...@di...> - 2015-05-25 19:11:51
|
> On May 25, 2015, at 1:29 PM, Arjen Markus <Arj...@de...> wrote: > > Hi everyone, > > An issue related to the use of C++ that has not been raised yet, but which surfaced recently in my comprehensive testing efforts is the fact that linking a C++ program requires a C++-enabled linker. Thus introducing C++ as the language in which PLplot is to be implemented would complicate the use of static builds. That may not be the most common option nowadays, but I think we need to take a conscious decision: do we want to continue to support static builds or not? One pro for static builds is that they make deployment, especially of binary-only distributions, much easier (and safer). > > Regards, > > Arjen > > I don't have a strong opinion either way on C++ vs C, and I think we can achieve the design objectives with either language. I do think we run the risk of breaking compatibility with some percentage of the current uses of PLplot. My biggest concern, however, is that switching to C++ will consume a big chunk of time. If we are going to make that investment, then a good scrub of the API should be done to make sure we do not handicap ourselves. My top recommendation is that we appoint a manager of the legacy PLplot to support the users during the transition. My gut feeling is that PLplot 5 will be around for quite a while. > > -----Original Message----- > > From: Hazen Babcock [mailto:hba...@ma...] > > Sent: Monday, May 25, 2015 7:18 PM > > To: plp...@li... > > Subject: Re: [Plplot-devel] PLPlot 6 and C++ > > > > [...] |
From: Arjen M. <Arj...@de...> - 2015-05-25 17:29:30
|
Hi everyone, An issue related to the use of C++ that has not been raised yet, but which surfaced recently in my comprehensive testing efforts, is the fact that linking a C++ program requires a C++-enabled linker. Thus introducing C++ as the language in which PLplot is to be implemented would complicate the use of static builds. That may not be the most common option nowadays, but I think we need to take a conscious decision: do we want to continue to support static builds or not? One pro for static builds is that they make deployment, especially of binary-only distributions, much easier (and safer). Regards, Arjen > -----Original Message----- > From: Hazen Babcock [mailto:hba...@ma...] > Sent: Monday, May 25, 2015 7:18 PM > To: plp...@li... > Subject: Re: [Plplot-devel] PLPlot 6 and C++ > > [...] > _______________________________________________ > Plplot-devel mailing list > Plp...@li... > https://lists.sourceforge.net/lists/listinfo/plplot-devel |
From: Hazen B. <hba...@ma...> - 2015-05-25 17:18:28
|
> > On 2015-05-22 13:00+0100 Phil Rosenberg wrote: > >> Hi All >> I mentioned this briefly during the previous release cycle, but we all >> had more pressing things to deal with. Dealing with errors is >> something we really need to get a hold on for PLPlot 6 and one of the >> big drivers for considering a major API change. However, I am certain >> that the reason why this has been a problem is that propagating those >> errors back up through many layers of function calls is tedious and >> error-prone, so it has been much simpler to simply call exit(). >> >> Here is one possible solution. We switch the core code to C++ with a C frontend. >> > > However, let's be clear here: the cost of that benefit is > likely to be considerably higher than you are estimating now. > For example, there are going to be substantial costs in terms of > doxygen and DocBook documentation, and also the domestic bindings and > foreign bindings (e.g., the PDL and Lisp bindings for PLplot) are of > some concern. Of course, in theory you could write a perfect C wrapper > library for the C++ core so that bindings on that C wrapper work just as > well as they do for PLplot 5. But that C wrapper would really > have to be perfect and would make the bindings less efficient. Note, > I don't want to emphasize that last point too much since reliability > is much more important to me than speed. But at some point, I assume > you will want to address that issue (e.g., for the swig-generated > bindings) by ignoring the C wrapper and binding directly to the C++ > core instead, which adds to the cost of this change. I agree with Alan that it is not immediately obvious that this approach will save a lot of effort. All the API functions are going to have to be modified anyway, even if only to return some sort of error/no-error code. This in turn will mean updating all the examples and a lot of the documentation. So the additional saving might be something like 10% on top of this? 
A C++ solution would also involve adding extern "C" to every part of the API that we want to expose. It could be argued that the exercise of dealing with the propagation issues might help to clean up issues that were otherwise just papered over. I think we'd have to do this anyway in order to take advantage of your proposal to use the out-of-scope array deletion feature of C++, as every array in PLplot is then going to have to be a C++ object. This is also going to be a lot of work, though we'd face similar cleanup issues in C. By my count there are currently 146 calls to plexit() in PLplot core, most of which are related to memory allocation. So I lean slightly towards staying with C, but if others think that C++ is the way to go, then that is ok with me and I'll try to help with things like updating the examples, documentation, etc. Some questions: 1. Are C++ compilers universal? Are there any major platforms that we want to support that do not have readily available C++ support? 2. In my reading on this subject, some have mentioned that a C++ compiler can be substantially slower. Is this true? Is it significant? -Hazen |
From: Arjen M. <Arj...@de...> - 2015-05-25 17:08:23
|
Hi Alan, Right, that worked, that is: I updated the repository first, changed to the new branch and then applied my changes. So "git format-patch master" did the trick. I must have overlooked that possibility in the man page - it describes the options in extenso, but I did not see this one. Anyway, the patches are attached. Regards, Arjen > -----Original Message----- > From: Alan W. Irwin [mailto:ir...@be...] > Sent: Monday, May 25, 2015 6:33 PM > To: Arjen Markus > Cc: PLplot development list > Subject: RE: A possible design of our new Fortran binding > > On 2015-05-25 12:38-0000 Arjen Markus wrote: > > > Hi Alan, > > > > > > > >> -----Original Message----- > >> From: Alan W. Irwin [mailto:ir...@be...] > >> > >> Of course, if I am going to contribute to this development I do need > >> git access to your private topic branch work using the usual "git > >> format-patch" and "git am" method that is described in > >> README.developers. So for now I suggest your first priority would be to present > exactly what you have given us here as a tarball in "git format-patch" > >> form instead with no further changes (except possibly not including > >> the x00f.f90 change since I would just revert that, see above). And > >> that solid git start with just your present work and no further > >> except possibly excluding the x00f.f90 change would allow us to > >> develop this private topic from there. Of course, I am aware you have > >> had some problems with using "git format-patch" in the past, but if > >> those continue let me know here or off list, and I think I should be able to give you > an exact cookbook of what to do to get started. > >> > > I must be overlooking something, because whatever I do with "git format-patch" it > invariably produces no patch at all. > > > > > I have made my changes on a separate branch, tried to produce a patch > > file (nothing happened) committed them to the local repository and tried to produce > a patch file (nothing continued to happen). 
> > Hi Arjen: > > I assume from what you said above that your current situation is a master branch > (presumably the same as the one at SF, but if you are behind in updating that with > the SF variant it doesn't matter) and private branch (let's call it f95update) that is > identical to the master branch except that the f95update branch has some extra > commits. > To verify that situation use the "git log --oneline" command, e.g., > > git log --oneline -4 master > git log --oneline -7 f95update > > which respectively show the last 4 commits in master and last 7 commits in > f95update where you specified 7 because you were pretty sure there were 3 extra > commits in the f95update branch. > > So let's assume those commands show you that your situation is > > master: ... A-B-C > \ > f95update: D-E-F > > where D-E-F are the extra commits you want to package with format-patch. > > Then from "git help format-patch" I would try > > git checkout f95update > git format-patch master > > where "master" refers to the commit ID C at the tip of the master branch where you > branched f95update from master. > > For a more complicated example, suppose you have kept up with ordinary > development from the rest of us and updated the master branch after you created > f95update. > > Then the situation has changed to > > master: ... A-B-C-R-S-T-U-V-W-X-Y-Z > \ > f95update: D-E-F > > In this case to collect D-E-F in a format patch you simply run > > git checkout f95update > git format-patch D^ > > where D^ is notation for the ancestor of D which is the same as C. Or you also have > the option of rebasing so you end up effectively with > > master: ... A-B-C-R-S-T-U-V-W-X-Y-Z > \ > f95update: G-H-I > > where I have used different letters for the 3 extra commits to show that the rebase > has changed those commits (since each commit refers to a complete file system > snapshot) from what they were before. 
But in this case > > git checkout f95update > git format-patch G^ > > would give you the same patch file (except for the commit identity associated with > each of the 3 commits) that you produced before. > > I hope these examples help to clarify what goes on with git format-patch, but the > principal message I want you to take away from this is always to use "git help > <commandname>" first whenever you are having trouble figuring out the syntax of > some git command. > > Alan > __________________________ > Alan W. Irwin > > Astronomical research affiliation with Department of Physics and Astronomy, > University of Victoria (astrowww.phys.uvic.ca). > > Programming affiliations with the FreeEOS equation-of-state implementation for > stellar interiors (freeeos.sf.net); the Time Ephemerides project (timeephem.sf.net); > PLplot scientific plotting software package (plplot.sf.net); the libLASi project > (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); and the Linux Brochure > Project (lbproject.sf.net). > __________________________ > > Linux-powered Science > __________________________ |
From: Alan W. I. <ir...@be...> - 2015-05-25 16:41:25
|
On 2015-05-25 09:33-0700 Alan W. Irwin wrote: > Or > you also have the option of rebasing so you end up effectively with > > master: ... A-B-C-R-S-T-U-V-W-X-Y-Z > \ > f95update: G-H-I > > where I have used different letters for the 3 extra commits to show > that the rebase has changed those commits (since each commit refers to > a complete file system snapshot) from what they were before. But in > this case > > git checkout f95update > git format-patch G^ > > would give you the same patch file (except for the commit identity > associated with each of the 3 commits) that you produced before. Oops. I should have added the same patch file would be produced assuming the rebase process is a clean one (i.e., there are no conflicts to resolve between your changes and the additional changes done by rebasing on master). Alan __________________________ Alan W. Irwin Astronomical research affiliation with Department of Physics and Astronomy, University of Victoria (astrowww.phys.uvic.ca). Programming affiliations with the FreeEOS equation-of-state implementation for stellar interiors (freeeos.sf.net); the Time Ephemerides project (timeephem.sf.net); PLplot scientific plotting software package (plplot.sf.net); the libLASi project (unifont.org/lasi); the Loads of Linux Links project (loll.sf.net); and the Linux Brochure Project (lbproject.sf.net). __________________________ Linux-powered Science __________________________ |