openlilylib-user Mailing List for openLilyLib (Page 10)
Resources for LilyPond and LaTeX users writing (about) music
Status: Alpha
Brought to you by: u-li-1973
Archive: 2013 — Mar (45), Apr (38), Jun (4), Jul (1), Sep (10)
         2014 — Jan (164), Feb (4)
         2016 — Jan (1)
From: Urs L. <ul...@op...> - 2013-03-29 22:20:19
Hi Jan,

thanks for the feedback.

On Fri, 29 Mar 2013 18:29:04 +0100 Janek Warchoł <lem...@gm...> wrote:
> Hi,
>
> I'm not sure about the mailing list;

Currently I tend to keep the mailing list and the project front page on
SourceForge. The plain old mailman list seems appropriate. And having that
project page as one landing page shouldn't be bad either.

> as for file downloads i think zipped branches would be enough.

No. They may be used by a developer who isn't yet evangelized by Git ;-)
But they aren't ready for end-users because they aren't organized, presumably
contain unnecessary files and _don't_ contain any binary files, particularly
the manuals. So I think we have to make regular download packages available
through the web site. This means that there will be binary files in the web
site repository, but only those that are ready for release (they will be
added right before release) - this doesn't mean we have to track binary
files 'in development'.

> I agree with your other conclusions.

Thanks. But I'll reconsider the website issue. GitHub's Jekyll is very
blog-centric (as mentioned), and 'talking it into' creating hierarchical
websites would probably involve writing (simple) plugins - which can't be
used, as Jekyll is run on the server (something that seemed like an
advantage initially). As I have come to like the idea of 'static site
generators' (which I hadn't known before) I will consider using one that
works locally, isn't blog-centric and is written in Python.

Best
Urs
From: Janek W. <lem...@gm...> - 2013-03-29 17:29:33
Hi,

I'm not sure about the mailing list; as for file downloads i think zipped
branches would be enough. I agree with your other conclusions.

Janek

2013/3/28 Urs Liska <ul...@op...>
> Hi,
>
> after my first experiences with SourceForge I have to say I'm quite
> disappointed and will soon revert the move to that platform:
> - The web site is rather clumsy and unresponsive
> - The product is quite buggy and they are struggling with permission
>   issues:
>   - I couldn't delete a subproject, and a staff member with higher
>     permissions had to do that
>   - Updating things often isn't propagated to the site (for example,
>     reordering the menu or removing entries from it often didn't work)
>   - Project descriptions and/or feature lists weren't displayed on the
>     project main page
> - Code hosting is by far inferior to other providers
>   - The commit browser isn't really interesting - especially compared
>     to GitHub's network graph
>   - Pull requests are possible (although still undocumented!) but very
>     awkward: you can't process them online but have to pull them first
>     before you can see their content. One of my two trials didn't even
>     work at all because Git refused to merge (not because of merge
>     conflicts but because what I got to fetch was "nothing we can
>     merge") ...
>
> That last point was the show-stopper for me. Pull requests are really
> indispensable for a project like this: I want as many contributors as
> possible, but of course we can't give everybody push access.
>
> So we'll have to change our options once more. It seems we won't get a
> 'one-stop shop' now, but OK, who has one? (LilyPond doesn't, for
> example.) I will make a few announcements and ask for voting on a few
> open decisions now:
>
> Code hosting
> ============
> As the main providers for hosting I see GitHub and BitBucket. While
> their offerings are quite similar, I decided to go for GitHub for the
> following reasons:
> - The web experience is even better with GitHub (esp. for the network
>   graph)
> - You can actually edit files through the web interface (which is handy
>   for README files, for example)
> - The issue tracker is slightly more configurable (although I don't
>   like it too much)
> - You can actually edit pull requests online (although BitBucket's
>   implementation is very good too)
> - See below for the project website issue
>
> Web site
> ========
> Unlike SourceForge, both competitors only offer static web sites to be
> served. But GitHub offers a tool called Jekyll which seems a good
> alternative. Jekyll is a Ruby script that takes a set of templates and
> content files (similar to stacey) and renders them into a static web
> site before pushing it to a web server - which of course results in
> fast, reliable and secure web sites.
> GitHub offers that with special features: the project web site resides
> in a dedicated Git repository, and whenever something is pushed to the
> master branch, Jekyll processes the content on the GitHub server, so
> from the point of view of a content author it actually behaves like a
> server-side CMS. This is far better than the handling on SourceForge,
> where one has to upload the web content separately.
> The only drawback I see so far is that Jekyll is _very_ blog-centric, so
> it may become somewhat awkward to talk it into creating a traditional
> web site with hierarchical navigation.
>
> The domain redirect will work as with SourceForge, i.e. the web site
> will be accessible through the domain.
>
> Issues
> ======
> As mentioned, I don't like the GitHub issue tracker that much, but I
> think we can live with it. Integration of Git pushes with issues is a
> nice thing SourceForge doesn't offer (although BitBucket is far more
> versatile in this respect).
>
> File downloads
> ==============
> GitHub has discontinued its download pages (because they want to
> concentrate on _code_), but one can always download a complete
> checked-out branch as a zip file. I think we can do without
> SourceForge's extensive download area and just provide our downloads as
> part of the web site. This means we'll have to include the binaries in
> the website repository, but maybe that's OK.
>
> Or do you think we should put binary release files somewhere else (e.g.
> on an FTP server)?
>
> Mailing list
> ============
> That's the only thing we won't get from GitHub. I see two options and
> would like you to vote:
> a) Keep the SourceForge project only for the mailing list. I can
>    imagine having a project page on SourceForge that just links to our
>    other resources could be good in order to be found more easily,
>    although it _does_ imply some overhead.
> b) Recreate the Google group (sorry, Jan-Peter ;-) )
>
> Best
> Urs
>
> ------------------------------------------------------------------------------
> Own the Future-Intel® Level Up Game Demo Contest 2013
> Rise to greatness in Intel's independent game demo contest.
> Compete for recognition, cash, and the chance to get your game
> on Steam. $5K grand prize plus 10 genre and skill prizes.
> Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d
> _______________________________________________
> openlilylib-user mailing list
> ope...@li...
> https://lists.sourceforge.net/lists/listinfo/openlilylib-user
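[For readers unfamiliar with Jekyll, the directory layout it expects looks
roughly like this - an illustrative sketch, not the actual openLilyLib site
repository; the date-named `_posts` directory is the source of the
blog-centrism Urs mentions:]

```
site-repo/
├── _config.yml            # site-wide settings (title, base URL, ...)
├── _layouts/
│   └── default.html       # HTML template that pages are wrapped in
├── _posts/                # date-named blog entries - the blog-centric part
│   └── 2013-03-28-announcement.md
└── index.md               # top-level page, rendered into _site/index.html
```

On GitHub Pages the rendering step runs server-side on every push, which is
why site content can be edited like a server-side CMS, but also why custom
plugins can't be used.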
From: Joseph R. W. <jos...@we...> - 2013-03-28 18:29:25
On 03/28/2013 07:01 PM, Urs Liska wrote:
> I think there is something to that, otherwise we'd have a similar
> situation with LaTeX packages, isn't it?

Aren't most LaTeX packages licensed differently from GPL?

> The GPL doesn't say that the _usage result_ of GPLed software must be
> GPLed. And although .ly files are source code, I'd consider them rather
> documents than software.

I think there's a level of truth to that and also to David's noting that
there is not actually a combined entity being formed. But I still find the
situation far too ambiguous for my taste, and tweaking the licence (or
adding a clear exception) might be the easiest way to resolve that.
From: Urs L. <ul...@op...> - 2013-03-28 18:02:24
On 28.03.2013 17:07, Joseph Rushton Wakeling wrote:
> [...]
> Looking into that code, I realized there is another issue. Your stated
> licence is GPL. How is that meant to interact with the case where e.g. I
> write an original piece of music and use OLLib's toolbox in my LilyPond
> source file? I am reasonably sure that if I release only the resulting
> PDF, there would not be an issue, but if I want to distribute the .ly
> source file(s) then almost certainly GPL requirements would kick in and
> would therefore force me to give a GPL licence to my piece of music.
>
> This seems to me to be unacceptable overreach on the part of the
> licensing. It may be worth raising this on the LP lists as it probably
> also applies to LP \include's.

Just one thought (I don't have more time now): could it be that LilyPond
source files that include OLLib or any other files from the LilyPond
distribution could be regarded as 'documents', so that the GPL doesn't
apply in the way you suggested? I think there is something to that,
otherwise we'd have a similar situation with LaTeX packages, wouldn't we?
The GPL doesn't say that the _usage result_ of GPLed software must be
GPLed. And although .ly files are source code, I'd consider them rather
documents than software.

???
From: Joseph R. W. <jos...@we...> - 2013-03-28 17:51:07
On 03/28/2013 05:59 PM, Urs Liska wrote:
> Why this overhead of having a remote repository and its submodule
> 'mirror'? And the caveats I read about in the progit book don't look
> reassuring either.

I'm not familiar with the remarks in the progit book, but the logic of
submodules as I see it is that you can divide your codebase into separately
versioned parts which can then be incorporated into multiple different
projects. In the example I gave you, the LLVM-based D compiler, it makes
sense to have the standard library as a submodule because that way
contributors on different D compilers can contribute to and pull from the
latest version of that standard library regardless of which compiler
they're personally using.

> Not really problems, it's rather the complexity of the number of repos.
> An example: if I work on, say, the manual for 'musicexamples' I may run
> into updating the stylesheet package. If everything is in one repository
> I will surely notice the need for committing this change, at least if I
> want to power down the computer and see that git status isn't clean.

Wouldn't that kind of issue suggest that your repos are improperly
modularized? For example, if an individual tutorial needs a stylesheet
change, it might make sense to allow tutorials to have custom local
stylesheets that are used alongside the global one. And if, reviewing the
merge of a tutorial patch, there is a stylesheet change, it could be
consciously decided to make that change to the global stylesheet instead.

Put it another way -- if you have to make changes to the global stylesheet
often, then your global stylesheet needs rethinking anyway. Preventing you
from habitually making stylesheet changes along with tutorial changes is
probably a good way to enforce good discipline of design.

> If the files were in separate repositories I would have to take care
> myself that I don't forget to also update the repo with the stylesheet
> in it.
> Or: each time I have to change computers I have to go through all the
> repos to check if there are changes to be fetched and merged (or
> rebased). That's quite tedious and requires a fair amount of
> concentration in order not to omit a branch, for example.

It _should_ be as simple as git pull, git submodule update -- one more
command than you have to run anyway.

> Maybe you're right. But: as it is now I have to tell the contributor:
> "clone the repository and make sure that LaTeX finds this folder and
> LilyPond finds this folder. You may have stuff there that you don't
> need, but you get everything you need with one clone." If there were
> smaller chunks I'd have to say: "You may clone the module you want to
> work on, but you will have to download and install the musicexamples and
> lilyglyphs packages in order to compile the docs."

Well, you should still be able to do that with a master project and
submodules. But if you're talking about a manual or a tutorial it's clear
that you have a strong dependency that requires everything. If you're
talking about just contributing to lilyglyphs, that _could_ be largely
independent of everything else, if you design things right.

For example, a lot of your sense of dependency between docs and code/LaTeX
packages seems to stem from your concept of the docs as a large-scale
manual. But if instead you have micro-documentation for the individual
toolboxes and packages (as you see with the D standard library), you
shouldn't need that strong interdependency. Then you can still _have_ the
large-scale manual, but as a separate project that builds on top of the
micro-docs.
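[The "one more command" workflow Joseph describes can be sketched
concretely. The following throwaway demo uses hypothetical repository names
('stylesheet', 'manual') and assumes a reasonably recent Git, which
requires `protocol.file.allow=always` when submodules point at local
paths. It wires one repo into another as a submodule, then shows the
two-command sync from a fresh clone:]

```shell
set -e
tmp=$(mktemp -d) && cd "$tmp"

# A separately versioned stylesheet repository (hypothetical):
git init -q stylesheet
(cd stylesheet && git config user.email you@example.org && git config user.name you &&
  echo '% common styles' > common.ily && git add . && git commit -qm 'initial stylesheet')

# The manual repository embeds it as a submodule:
git init -q manual
(cd manual && git config user.email you@example.org && git config user.name you &&
  git -c protocol.file.allow=always submodule add "$tmp/stylesheet" stylesheet &&
  git commit -qm 'add stylesheet submodule')

# A contributor on another machine clones and syncs -- just two commands:
git -c protocol.file.allow=always clone -q "$tmp/manual" workcopy
cd workcopy
git pull -q                                               # update the superproject
git -c protocol.file.allow=always submodule update --init # check out the recorded submodule commit
cat stylesheet/common.ily
```

When the submodule itself gains new commits, the superproject records the
new pointer with an ordinary `git add stylesheet && git commit`, and other
clones pick it up with the same two commands.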
From: Urs L. <ul...@op...> - 2013-03-28 17:18:31
On 28.03.2013 17:07, Joseph Rushton Wakeling wrote:
> On 03/28/2013 02:56 PM, Urs Liska wrote:
>> To be clear (maybe I suspect some misunderstanding here): OLLib is a
>> 'pure' LilyPond library. The user will write \include "OLLib.ily"
>> (after making the path accessible) and then can access all the
>> functions and commands we provide in there.
> Just as a small note, I'd recommend only allowing lowercase directory
> and file names. This reflects typical practice for development and
> libraries across just about every programming and/or markup language
> I've ever dealt with and also helps to avoid potential issues with
> different case sensitivity rules across OSes.

OK, I'll do that and make it a test case of my branching strategy ;-)

>> In addition he may include units from the OLLincludes subdirectory
>> (e.g. engravers) or he may inspect and use material from the
>> OLLexamples and OLLtemplates directories.
> I recommend trying to modularize as much as possible so that users can
> \include the minimally necessary parts of OLLib. Boost is a good example
> here.

Hm, that's what I originally intended (and what is actually still there:
each toolbox folder that actually has content has a ...toolbox.ily file
that can be included separately). But Jan-Peter (and to some extent Janek)
seemed to prefer an approach where the user simply includes the library as
a whole. As we're not yet settled on the primary entry point strategy we
can still discuss this (but in a separate thread, please, and maybe later
- maybe we should explicitly invite a few more people to participate in
the current set of discussion items?).

>> Functions we implement here _may_ be added in the future to LilyPond,
>> if we and the LilyPond developers consider them general and important
>> (and stable) enough. In the past we developed the \shape function in
>> our library, before David N managed to get it into the main code base.
> I think it would be good to have a focus on review, revision and
> standardization with the aim of being a more flexible test-ground for LP
> proper. Of course there need be no promises; some stuff may always
> remain outside. FWIW I think your chosen pianoStaff example could
> probably be revised and generalized sufficiently to be useful in LP
> proper, it's only a matter of time and thought. One possible way would
> be to replace SUp and SDown (meaning: use the staff named "up" or
> "down") with, say, \staffUp and \staffDown (meaning: switch to the staff
> that is above/below the current one). Then you need make no assumptions
> about staff naming.

That's a good idea (the general one), but it also needs some more thought
(see my previous comment).

> Looking into that code, I realized there is another issue. Your stated
> licence is GPL. How is that meant to interact with the case where e.g. I
> write an original piece of music and use OLLib's toolbox in my LilyPond
> source file? I am reasonably sure that if I release only the resulting
> PDF, there would not be an issue, but if I want to distribute the .ly
> source file(s) then almost certainly GPL requirements would kick in and
> would therefore force me to give a GPL licence to my piece of music.
>
> This seems to me to be unacceptable overreach on the part of the
> licensing. It may be worth raising this on the LP lists as it probably
> also applies to LP \include's.

_If_ that's the case, it is in fact unacceptable. Looks like the prospect
of another long-running thread on lilypond-user ...

>> I'm not really sure if I understand you correctly. Maybe it's more to
>> the point to ask about the relation between OLLib documentation and
>> tutorials?
> Well, I think it's worth mapping out very clearly what kinds of
> documentation are envisioned and how they interrelate -- and
> modularizing appropriately. For example, tutorials should probably not
> be bundled with library documentation.

Definitely.

>> There will be one 'book' documenting OLLib. This is traditional
>> documentation. And there will be tutorials that are made accessible on
>> the web site. They aren't conceptually related to OLLib, although they
>> may of course use it and may dwell on certain aspects of it.
>>
>> Is that clearer?
> It's clearer but I also think you have alternatives that are worth
> considering. Consider e.g. how D documents its standard library:
> http://dlang.org/phobos/
>
> Each function/data structure in the library has a brief piece of
> documentation, including examples, that is closely coupled to the source
> code (in fact it is written in the source code next to the code it
> documents, and the documentation is automatically collated from these
> doc entries in a manner similar to Doxygen).
>
> This is nice and easy to maintain because it is trivial to make sure the
> small amount of documentation associated with a given function is up to
> date with the function implementation itself.
>
> It's imperfect in another way, because it gives you only case-by-case
> documentation that is not so well integrated as (say) a book. But it
> also means you have a fundamental resource to point people towards which
> is fairly well guaranteed to be accurate.
>
> That in turn gives you a certain amount of freedom with respect to other
> more "user-friendly" documentation such as an in-depth manual or
> tutorials, because there's slightly less urgency behind their being 100%
> up to date and accurate -- they can be _a_ reference, not THE reference.

Good points to think about. It was the explicit intention to have more
than a reference manual for OLLib: the manual should also be a learning
resource going into some detail about the underlying concepts. But of
course it is a valid point that this raises the bar for someone wanting to
contribute, say, a single engraver. And your consideration about the
accuracy of the docs is a valid point too. One more item to discuss ...

>> Hm, I don't like having that stuff in separate branches. Of course
>> commits will probably never interfere because we have independent
>> directories. But that necessarily means that whenever I check out a
>> branch to work on one book, the others are removed from the working
>> tree and I can't have a look at them.
> Treat them as separate Git projects which can be submodules of a
> "master" project, not as different branches within the same repo.
>
> That way, you can keep as many or as few of them checked out as you want
> (in different subdirs), but their version history will be sufficiently
> independent that if I want to contribute to just one of them, I can pull
> that alone and not have to branch the whole collection.

>> That may be a valid point, but I think we shouldn't have any
>> compromising material in it from the start.
> Agree, but I think that we have to consider that this is an accident
> that is waiting to happen, however careful we are.

Right :(

>> I can't really judge that due to lack of experience. I'm constantly
>> struggling with keeping all my repos up to date (as I work on different
>> computers I usually have to go through that procedure at least four
>> times a day ...), which is even more tedious if I rebase my 'private'
>> branches and have to take great care to incorporate them correctly in
>> the other local repo(s).
> Use rebase with great, great care ;-)

Of course, I know. But as I have to push often, and at times that are
imposed on me by the clock rather than by the state of my work (each time
I leave a computer), I have lots of work-in-progress commits in my feature
branches that I can safely rebase. It's just that one has to be careful
about them. And of course I wouldn't rebase a master branch ;-)

> I have to say I have never had that great a problem keeping my repos
> clean and up to date, although I have the bonus of using only a single
> machine most of the time. It helps to be very strict with yourself about
> using feature branches and maintaining "master" as a clone of upstream.
> (Or alternatively, maintaining an "upstream" branch to the same effect.)
>
> Perhaps one of these days we could have a phone/Skype/other VoIP call to
> discuss good git practices, and see if we can resolve some of these
> issues and concerns together?

Maybe a good idea. But perhaps I'd rather collect a set of topics to be
discussed, invite a few more people to join the list (well, actually I can
subscribe anybody ;-)) and try to sort things out this way with some more
input from more people.

>> There is _no_ status yet for either, because I have no idea about the
>> proceedings. I planned to look into this as soon as there are regular
>> file releases for both packages.
>> The 0.2 release of 'musicexamples' isn't available ATM, and I will have
>> to change that release once more before making it available again,
>> because I have changed the copyright notice (in the hope there won't be
>> somebody who downloaded it in the few days and will make a problem out
>> of it).
>> Probably that would be a candidate for a separate thread (if it still
>> has to be discussed), but I think we should license all source code
>> (i.e. LilyPond sources and LaTeX packages) under the GPL and the
>> 'creative' parts (documentation and tutorials) under CC BY-SA.
> Licensing needs extremely careful thought, because the bottom line of it
> is this: you want to be absolutely, absolutely certain of what
> derivative work you are staking a copyleft claim for. I imagine we both
> agree that it's important that software and documentation deriving from
> your work should continue to be free-as-in-freedom, and equally that
> music written using your tools should be licensable as the
> composer/publisher/engraver wishes (and that would include .ly files as
> well as the graphical score produced).

Yes, definitely.

> As things stand I have some severe doubts that raw GPL/CC-BY-SA would
> satisfy this requirement.

The CC-BY-SA part shouldn't be the problem, should it?

> One way to bypass that issue might be to use a permissive licence for
> code and/or documentation, such as the Boost or Apache licenses (for
> code) and CC-BY for docs. It carries a certain risk, but you have to ask
> how seriously likely it is that someone would produce a proprietary
> derivative work in the first place, and how much damage it would
> actually do if they did.
>
> This MUST have been discussed with respect to LilyPond \include calls,
> so let's examine the past mailing list discussions and also raise the
> issue on -devel.
From: Urs L. <ul...@op...> - 2013-03-28 17:01:19
On 28.03.2013 15:27, Joseph Rushton Wakeling wrote:
> On 03/28/2013 02:23 PM, Urs Liska wrote:
>> I'm somewhat reluctant about this overhead to keep everything in sync.
>> But I have to admit I don't have any practical experience with that.
>> And I haven't found a way so far (for myself) to cope with the growing
>> number of repos that have to be kept in sync every day.
> git submodule update [--init] doesn't work?

I didn't find this in http://git-scm.com/book/en/Git-Tools-Submodules.
The man page for git-submodule states that command, but I have to admit
that I don't fully understand it. But that man page also says: "Submodules
allow foreign repositories to be embedded within a dedicated subdirectory
of the source tree". That's what I don't really understand in our context:
both documents suggest using submodules to integrate existing remote
secondary repositories into the main one. Why this overhead of having a
remote repository and its submodule 'mirror'? And the caveats I read about
in the progit book don't look reassuring either.

> What are the particular synchronization problems that you see arising?

Not really problems; it's rather the complexity of the number of repos.
An example: if I work on, say, the manual for 'musicexamples' I may run
into updating the stylesheet package. If everything is in one repository I
will surely notice the need for committing this change, at least if I want
to power down the computer and see that git status isn't clean. If the
files were in separate repositories I would have to take care myself that
I don't forget to also update the repo with the stylesheet in it.
Or: each time I have to change computers I have to go through all the
repos to check if there are changes to be fetched and merged (or rebased).
That's quite tedious and requires a fair amount of concentration in order
not to omit a branch, for example.

>> The dependencies between the parts are there only for contributors,
>> but: everybody who wants to contribute is responsible for contributing
>> the relevant documentation for the contribution. And to compile the
>> manuals one needs everything: the basic LaTeX stuff, musicexamples, and
>> lilyglyphs. So if I want to contribute, I'd have to clone the repos and
>> take care not to break anything with the remotes of the submodules.
>> Maybe I'm wrong about that, but it somehow looks scary to me.
>> Users who only want to _use_ one or more of the parts will get archived
>> 'releases' anyway.
> Let's put it this way: "breaking" dependencies should go only one way,
> i.e. if you change the toolbox or lilyglyphs then perhaps the manuals
> won't work, but changing the manuals shouldn't _break_ those upstreams.
> (They might contain incorrect or out-of-date information, but that's a
> slightly different problem.)

That's right (and is also the case currently).

> Now, in turn, it ought to be possible to organize and plan things so
> that breaking changes are rare. This is effectively a policy issue.
>
> To put this in context, KDE is a huge project with uncountable
> inter-dependencies among projects, which was originally tracked in one
> super-huge SVN repo. But on switching to git, they moved to the
> one-submodule-per-package model.
>
> I think you need to think of it like this: users who only want to _use_
> one or more of the parts may operate off archived releases, but what
> about users who only want to _work_ on one of the parts? It's really not
> clear to me why someone who wants to work solely on lilyglyphs (say)
> should have to pull in OLLib or the manuals.

Maybe you're right. But: as it is now I have to tell the contributor:
"clone the repository and make sure that LaTeX finds this folder and
LilyPond finds this folder. You may have stuff there that you don't need,
but you get everything you need with one clone." If there were smaller
chunks I'd have to say: "You may clone the module you want to work on, but
you will have to download and install the musicexamples and lilyglyphs
packages in order to compile the docs."

> Likewise it's not clear to me why someone who contributes something to
> the toolbox should have to contribute documentation beyond the most
> basic description (which is why I suggested drawing a line between the
> basic toolbox documentation and more in-depth tutorials and manuals).

I will have to think about that, but I'll also comment on your other email
about it.

> It just seems to me that, looking at things, your sense of
> interdependency between the parts is a social rather than a technical
> requirement, and that tweaking those social requirements could still get
> you where you want to go, just making it much easier technically.
>
> LP contribution is in itself unnecessarily complicated because the
> requirements for doc and code contributions overlap in a way that they
> shouldn't have to, and I don't think you should make the same mistakes.
>
> For example, one of the benefits of DVCS is that it means that I can
> work on my little submodule by myself, do what I want to, and only worry
> about breakages when I'm submitting for merge -- when it gets run
> through the test suite.
>
> _That's_ how you keep things in sync: by automated testing prior to
> accepting a pull request, rather than by forcing everyone to have the
> whole massive archive on their machine and keep it in sync manually
> (which, apart from a few hyper-virtuous individuals, won't happen in any
> case).
>
> I think GitHub has some nice tools for integrating pull requests with
> automated testing, and this could be worth looking into.

OK, but I'll have to come back to this later, as it seems somewhat too
much right now ;-)

Best
Urs
From: Joseph R. W. <jos...@we...> - 2013-03-28 16:08:00
On 03/28/2013 02:56 PM, Urs Liska wrote: > To be clear (maybe I suspect some misunderstanding here): OLLib is a > 'pure' LilyPond library. The user will write \include "OLLib.ily" (after > making the path accessible) and then can access all the function and > commands we provide in there. Just as a small note, I'd recommend only allowing lowercase directory and file names. This reflects typical practice for development and libraries across just about every programming and/or markup language I've ever dealt with and also helps to avoid potential issues with different case sensitivity rules across OS's. > In addition he may include units from the > OLLincludes subdirectory (e.g. engravers) or he may inspect and use > material from the OLLexamples and OLLtemplates directories. I recommend trying to modularize as much as possible so that users can \include the minimally necessary parts of OLLib. Boost is a good example here. > Functions we implement here _may_ be added in the future to LilyPond, if > we and the LilyPond developers consider them general and important (and > stable) enough. In the past we developed the \shape function in our > library, before David N managed to get it in the main code base. I think it would be good to have a focus on review, revision and standardization with the aim of being a more flexible test-ground for LP proper. Of course there need be no promises, some stuff may always remain outside. FWIW I think your chosen pianoStaff example could probably be revised and generalized sufficiently to be useful in LP proper, it's only a matter of time and thought. One possible way would be to replace SUp and SDown meaning, use the staff named "up" or "down" with, say, \staffUp and \staffDown meaning, switch to the staff that is above/below the current one. Then you need make no assumptions about staff naming. Looking into that code, I realized there is another issue. Your stated licence is GPL. How is that meant to interact with the case where e.g. 
I write an original piece of music and use OLLib's toolbox in my Lilypond source file? I am reasonably sure that if I release only the resulting PDF, there would not be an issue, but if I want to distribute the .ly source file(s) then almost certainly GPL requirements would kick in and would therefore force me to give a GPL licence to my piece of music. This seems to me to be unacceptable overreach on the part of the licensing. It may be worth raising this on the LP lists as it probably also applies to LP \include's. > I'm not really sure if I understand you correctly. Maybe it's more to > the point to ask about the relation between OLLib documentation and > tutorials? Well, I think it's worth mapping out very clearly what kinds of documentation are envisioned and how they interrelate -- and modularizing appropriately. For example, tutorials should probably not be bundled with library documentation. > There will be one 'book' documenting OLLib. This is traditional > documentation. > And there will be tutorials that are made accessible on the web site. > They aren't conceptually related to OLLib, although they may of course > use it and may dwell on certain aspects of it. > > Is that clearer? It's clearer but I also think you have alternatives that are worth considering. Consider e.g. how D documents its standard library: http://dlang.org/phobos/ Each function/data structure in the library has a brief piece of documentation, including examples, that is closely coupled to the source code (in fact it is written in the source code next to the code it documents, and the documentation is automatically collated from these doc entries in a manner similar to Doxygen). This is nice and easy to maintain because it is trivial to make sure the small amount of documentation associated with a given function is up to date with the function implementation itself. 
It's imperfect in another way, because it gives you only case-by-case documentation that is not so well integrated as (say) a book. But it also means you have a fundamental resource to point people towards which is fairly well guaranteed to be accurate. That in turn gives you a certain amount of freedom with respect to other more "user-friendly" documentation such as an in-depth manual or tutorials, because there's slightly less urgency behind their being 100% up to date and accurate -- they can be _a_ reference, not THE reference. > Hm, I don't like having that stuff in separate branches. Of course > commits will probably never interfere because we have independent > directories. But that necessarily means that whenever I check out a > branch to work on one book, the others are removed from the working tree > and I can't have a look at them. Treat them as separate Git projects which can be submodules of a "master" project, not as different branches within the same repo. That way, you can keep as many or as few of them checked out as you want (in different subdirs), but their version history will be sufficiently independent that if I want to contribute to just one of them, I can pull that alone and not have to branch the whole collection. > That may be a valid point, but I think we shouldn't have any > compromising material in it from the start. Agree, but I think that we have to consider that this is an accident that is waiting to happen, however careful we are. > I can't really judge that due to lack of experience. I'm constantly > struggling with keeping all my repos up to date (as I work on different > computers I usually have to go through that procedure at least four > times a day ...), which is even more tedious if I rebase my 'private' > branches and have to take great care to incorporate them correctly in > the other local repo(s). 
Use rebase with great, great care ;-) I have to say I have never had that great a problem keeping my repos clean and up to date, although I have the bonus of using only a single machine most of the time. It helps to be very strict with yourself about using feature branches and maintaining "master" as a clone of upstream. (Or alternatively, maintaining an "upstream" branch to same effect.) Perhaps one of these days we could have a phone/Skype/other VoIP call to discuss good git practices, and see if we can resolve some of these issues and concerns together? > There is _no_ status yet for both because I have no idea about the > proceedings. I planned to look into this as soon as there are regular > file releases for both packages. > The 0.2 release of 'musicexamples' isn't available ATM, and I will have > to change that release once more before making it available again, > because I have changed the copyright notice (in the hope there won't be > somebody who downloaded it in the few days and will make a problem out > of it.) > Probably that would be a candidate for a separate thread (if it still > has to be discussed), but I think we should license all source code > (i.e. LilyPond sources and LaTeX packages) with the GPL and the > 'creative' parts (documentation and tutorials) with CC BY-SA. Licensing needs extremely careful thought, because the bottom line of it is this: you want to be absolutely, absolutely certain of what derivative work you are staking a copyleft claim for. I imagine we both agree that it's important that software and documentation deriving from your work should continue to be free-as-in-freedom, and equally that music written using your tools should be licensable as the composer/publisher/engraver wishes (and that would include .ly files as well as the graphical score produced). As things stand I have some severe doubts that raw GPL/CC-BY-SA would satisfy this requirement. 
One way to bypass that issue might be to use a permissive licence for code and/or documentation, such as the Boost or Apache licenses (for code) and CC-BY for docs. It carries a certain risk, but you have to ask how seriously likely it is that someone would produce a proprietary derivative work in the first place, and how much damage it would actually do if they did. This MUST have been discussed with respect to Lilypond \include calls, so let's examine the past mailing list discussions and also raise the issue on -devel. |
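The "feature branches plus a pristine master" discipline recommended earlier in this message can be sketched concretely. This is only an illustration: the repository and branch names are invented, and the upstream here is a local stand-in so the commands run self-contained.

```shell
set -e
cd "$(mktemp -d)"
git config --global user.email demo@example.com
git config --global user.name Demo
git config --global init.defaultBranch master   # ignored by older git, which defaults to master anyway

# Stand-in for the shared upstream repository.
git init -q upstream
git -C upstream commit -q --allow-empty -m "upstream history"

git clone -q upstream work && cd work

# Local work lives on feature branches only...
git checkout -q -b feature/manual-fixes
echo "typo fixed" > manual.txt
git add manual.txt && git commit -qm "fix typo in manual"

# ...so master stays a clone of upstream and can always be
# fast-forwarded; the feature branch is then rebased onto it.
git checkout -q master
git pull -q --ff-only          # never creates local-only commits on master
git rebase master feature/manual-fixes
```

Because master never carries local commits, the rebase of each feature branch is the only place where local and upstream history have to be reconciled.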
From: Joseph R. W. <jos...@we...> - 2013-03-28 14:27:48
|
On 03/28/2013 02:23 PM, Urs Liska wrote: > I'm somewhat reluctant to this overhead to keep everything in sync. > But I have to admit I don't have any practical experience with that. > And I haven't found a way so far (for myself) to cope with the growing > number of repos that have to be kept in sync every day. git submodule update [--init] doesn't work? What are the particular synchronization problems that you see arising? > The dependencies between the parts is there only for contributors, but: > Everybody who wants to contribute is responsible to contribute the > relevant documentation for the contribution. And to compile the manuals > one needs everything: The basic LaTeX stuff, musicexamples, and > lilyglyphs. So if I want to contribute, I'd have to clone the repos, > have to take care of not breaking anything with the remotes of the > submodules. Maybe I'm wrong with that, but it somehow looks scary to me. > Users who only want to _use_ one or more of the parts will get archived > 'releases' anyway. Let's put it this way: "breaking" dependencies should be only one way, i.e. if you change the toolbox or lilyglyphs then perhaps the manuals won't work, but changing the manuals shouldn't _break_ those upstreams. (They might contain incorrect or out-of-date information, but that's a slightly different problem.) Now, in turn, it ought to be possible to organize and plan things so that breaking changes are rare. This is effectively a policy issue. To put this in context, KDE is a huge project with uncountable inter-dependencies among projects, and which originally was tracked in one super-huge SVN repo. But on switching to git, they moved to the one-submodule-per-package model. I think you need to think of it like this: users who only want to _use_ one or more of the parts may operate off archived releases, but what about users who only want to _work_ on one of the parts? 
It's really not clear to me why someone who wants to work solely on lilyglyphs (say) should have to pull in OLLib or the manuals. Likewise it's not clear to me why someone who contributes something to the toolbox should have to contribute documentation beyond the most basic description (which is why I suggested drawing a line between the basic toolbox documentation and more in-depth tutorials and manuals). It just seems to me that, looking at things, your sense of interdependency between the parts is a social rather than a technical requirement, and that tweaking those social requirements could still get you where you want to go, just making it much technically easier. LP contribution is in itself unnecessarily complicated because the requirements for doc and code contributions overlap in a way that they shouldn't have to, and I don't think you should make the same mistakes. For example, one of the benefits of DVCS is that it means that I can work on my little submodule by myself, do what I want to, and only worry about breakages when I'm submitting for merge -- when it gets run through the test suite. _That's_ how you keep things in sync, by automated testing prior to accepting a pull request, rather than by forcing everyone to have the whole massive archive on their machine and keep it in sync manually (which apart from a few hyper-virtuous individuals won't happen in any case). I think GitHub has some nice tools for integrating pull requests with automated testing, and this could be worth looking into. |
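The one-submodule-per-package model discussed in this message can be sketched as follows. Everything here is illustrative: the part names (`oll-lib`, `lilyglyphs`) and paths are placeholders, and local stand-in repositories are created so the commands run self-contained (modern git additionally needs `protocol.file.allow` for local-path submodules; real remote URLs don't).

```shell
set -e
cd "$(mktemp -d)"
git config --global user.email demo@example.com
git config --global user.name Demo
# Git >= 2.38 forbids local-path submodules by default; only this demo needs it.
git config --global protocol.file.allow always

# Stand-ins for the independent part repositories.
for part in oll-lib lilyglyphs; do
  git init -q "$part"
  git -C "$part" commit -q --allow-empty -m "initial commit"
done
SRC="$PWD"

# The superproject records only a pointer (one commit SHA) per part.
git init -q openlilylib
cd openlilylib
git submodule add "$SRC/oll-lib" oll-lib
git submodule add "$SRC/lilyglyphs" lilyglyphs
git commit -q -m "tie the parts together as submodules"

# A contributor clones just the one part they work on; someone who
# wants everything clones the superproject and initializes the parts:
cd .. && git clone -q openlilylib everything && cd everything
git submodule update --init
```

Each part keeps its own independent history, so pulling, branching, and contributing to one part never requires touching the others.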
From: Urs L. <ul...@op...> - 2013-03-28 13:56:39
|
Hi Joe, thanks for that detailed feedback. Am 28.03.2013 13:49, schrieb Joseph Rushton Wakeling: > On 03/18/2013 01:06 PM, Urs Liska wrote: >> I have completed a first working version of our site www.openlilylib.org >> and would like to ask you for feedback before I proceed. > These are still somewhat superficial impressions as I have not had a lot of > brain time free due to work :-( But here goes: > > -- I would like to see a better clarification of the relationship between > OLLib and Lilypond proper. In particular where usability/technical > tools are provided, why aren't they being pushed directly into LP proper? > Will they be in future? How is the development relationship seen for the > future, e.g. will OLLib be for LP something like what Boost is for C++, a > testing and review ground where concepts and tools can be refined before > being standardized? To be clear (maybe I suspect some misunderstanding here): OLLib is a 'pure' LilyPond library. The user will write \include "OLLib.ily" (after making the path accessible) and then can access all the function and commands we provide in there. In addition he may include units from the OLLincludes subdirectory (e.g. engravers) or he may inspect and use material from the OLLexamples and OLLtemplates directories. Functions we implement here _may_ be added in the future to LilyPond, if we and the LilyPond developers consider them general and important (and stable) enough. In the past we developed the \shape function in our library, before David N managed to get it in the main code base. OTOH there are functions that just aren't general enough to be part of LilyPond. Take /OLLib/toolboxes/pianoToolbox/pianoStaff.ily for example (one of the few items already online). These shorthands are very useful for switching staves in piano music, but they rely on a certain naming convention for the staves ("up" and "down"). 
This naming convention is consistent with the score blocks in OLLincludes and _can_ of course be applied without these includes. But I couldn't imagine having a command in LilyPond that only works if you name the staves in a specific way. So this kind of content is an optional enhancement that will never be part of LilyPond itself. > -- In respect of the above, how is the rationale different between the toolbox > and the documentation? I'm not really sure if I understand you correctly. Maybe it's more to the point to ask about the relation between OLLib documentation and tutorials? There will be one 'book' documenting OLLib. This is traditional documentation. And there will be tutorials that are made accessible on the web site. They aren't conceptually related to OLLib, although they may of course use it and may dwell on certain aspects of it. Is that clearer? > -- I think it's worth separating out the toolbox and documentation aspects of > OLLib into separate projects. Or rather: I would separate tools and their > immediate documentation, from larger-scale tutorials and manuals. Hm, I'd say that's the plan already? We have three 'tools': OLLib (plus ...includes and examples), musicexamples, lilyglyphs. Each one of them has its own individual manual. Besides that there is the collection of tutorials, and a Contributor's Guide. > Separate > "books" and tutorials should probably be separate Git branches, which > might be grouped together using submodules. Hm, I don't like having that stuff in separate branches. Of course commits will probably never interfere because we have independent directories. But that necessarily means that whenever I check out a branch to work on one book, the others are removed from the working tree and I can't have a look at them. > > Example reason: suppose you have some tutorials which contain in-copyright > material, and a downstream user wants to easily excise them. 
It's MUCH > easier if they can simply delete a pointer to a submodule, rather than > having to inherit a complete project history including the copyrighted > material. > > That could also be a protection for OLLib itself, because in the event of > a publisher dispute we could just remove the individual git repo of the > disputed material, rather than having to do messy rewrites of a large > amount of git history. That may be a valid point, but I think we shouldn't have any compromising material in it from the start. > > -- Bear in mind that separating as much as possible into separate repos or > branches is also good because many git commands scale with overall project > size, hence grouping projects into modules as small as possible helps > improve the developer experience and the ability of the project to expand. I can't really judge that due to lack of experience. I'm constantly struggling with keeping all my repos up to date (as I work on different computers I usually have to go through that procedure at least four times a day ...), which is even more tedious if I rebase my 'private' branches and have to take great care to incorporate them correctly in the other local repo(s). > > -- What's the status of lilyglyphs on CTAN? Would be good to get it there > quickly and add a link accordingly. > > -- Ditto musicexamples. There is _no_ status yet for both because I have no idea about the proceedings. I planned to look into this as soon as there are regular file releases for both packages. The 0.2 release of 'musicexamples' isn't available ATM, and I will have to change that release once more before making it available again, because I have changed the copyright notice (in the hope there won't be somebody who downloaded it in the few days and will make a problem out of it.) Probably that would be a candidate for a separate thread (if it still has to be discussed), but I think we should license all source code (i.e. 
LilyPond sources and LaTeX packages) with the GPL and the 'creative' parts (documentation and tutorials) with CC BY-SA. Best Urs > > ------------------------------------------------------------------------------ > Own the Future-Intel® Level Up Game Demo Contest 2013 > Rise to greatness in Intel's independent game demo contest. > Compete for recognition, cash, and the chance to get your game > on Steam. $5K grand prize plus 10 genre and skill prizes. > Submit your demo by 6/6/13. http://p.sf.net/sfu/intel_levelupd2d > _______________________________________________ > openlilylib-user mailing list > ope...@li... > https://lists.sourceforge.net/lists/listinfo/openlilylib-user |
From: Urs L. <ul...@op...> - 2013-03-28 13:24:10
|
Am 28.03.2013 13:19, schrieb Joseph Rushton Wakeling: > On 03/20/2013 04:50 PM, Janek Warchoł wrote: >>> If that would be possible, we could have one SourceForge project only, >>> which would be less confusing probably. >> I like tinkering with git and i think i could learn how to merge >> repositories. However, maybe git submodules would be a correct answer? A >> submodule is just a repository inside another repository, so that we could >> have a big repo containing all projects, while all of them would remain to >> be separate repos. >> I don't know how well that plays with SourceForge, however. > Sorry to come to this late in the day, but I don't see the point in having > one-repo-to-rule-them-all _unless_ done via submodules -- the separation into 3 > different projects is quite correct IMO, as each part is clearly independently > useful (or at least, such dependencies as exist are one-way only). Hm, I'm not really sure. If I'm not completely mistaken (and your linked examples doesn't indicate so) then a submodule is kind of a link to another git repository. So you have - a 'toplevel' repository - one or more accompanying repositories - one or more submodules in the main repository that have the accompanying repos as their remote(s) I'm somewhat reluctant to this overhead to keep everything in sync. But I have to admit I don't have any practical experience with that. And I haven't found a way so far (for myself) to cope with the growing number of repos that have to be kept in sync every day. The dependencies between the parts is there only for contributors, but: Everybody who wants to contribute is responsible to contribute the relevant documentation for the contribution. And to compile the manuals one needs everything: The basic LaTeX stuff, musicexamples, and lilyglyphs. So if I want to contribute, I'd have to clone the repos, have to take care of not breaking anything with the remotes of the submodules. 
Maybe I'm wrong with that, but it somehow looks scary to me. Users who only want to _use_ one or more of the parts will get archived 'releases' anyway. I'm commenting on the structure and relation of the parts in your other email. Best Urs > > If you want a nice example of submodules, you can see it here with LDC (a > compiler for the D programming language based on the LLVM backend). This > project has the language runtime, standard library (Phobos) and test suite each > as separate submodules. > https://github.com/ldc-developers/ldc > > Since OLL is moving back to GitHub, that shouldn't be a problem. |
From: Joseph R. W. <jos...@we...> - 2013-03-28 12:49:15
|
On 03/18/2013 01:06 PM, Urs Liska wrote: > I have completed a first working version of our site www.openlilylib.org > and would like to ask you for feedback before I proceed. These are still somewhat superficial impressions as I have not had a lot of brain time free due to work :-( But here goes: -- I would like to see a better clarification of the relationship between OLLib and Lilypond proper. In particular where usability/technical tools are provided, why aren't they being pushed directly into LP proper? Will they be in future? How is the development relationship seen for the future, e.g. will OLLib be for LP something like what Boost is for C++, a testing and review ground where concepts and tools can be refined before being standardized? -- In respect of the above, how is the rationale different between the toolbox and the documentation? -- I think it's worth separating out the toolbox and documentation aspects of OLLib into separate projects. Or rather: I would separate tools and their immediate documentation, from larger-scale tutorials and manuals. Separate "books" and tutorials should probably be separate Git branches, which might be grouped together using submodules. Example reason: suppose you have some tutorials which contain in-copyright material, and a downstream user wants to easily excise them. It's MUCH easier if they can simply delete a pointer to a submodule, rather than having to inherit a complete project history including the copyrighted material. That could also be a protection for OLLib itself, because in the event of a publisher dispute we could just remove the individual git repo of the disputed material, rather than having to do messy rewrites of a large amount of git history. 
-- Bear in mind that separating as much as possible into separate repos or branches is also good because many git commands scale with overall project size, hence grouping projects into modules as small as possible helps improve the developer experience and the ability of the project to expand. -- What's the status of lilyglyphs on CTAN? Would be good to get it there quickly and add a link accordingly. -- Ditto musicexamples. |
From: Joseph R. W. <jos...@we...> - 2013-03-28 12:19:42
|
On 03/20/2013 04:50 PM, Janek Warchoł wrote: >> If that would be possible, we could have one SourceForge project only, >> which would be less confusing probably. > > I like tinkering with git and i think i could learn how to merge > repositories. However, maybe git submodules would be a correct answer? A > submodule is just a repository inside another repository, so that we could > have a big repo containing all projects, while all of them would remain to > be separate repos. > I don't know how well that plays with SourceForge, however. Sorry to come to this late in the day, but I don't see the point in having one-repo-to-rule-them-all _unless_ done via submodules -- the separation into 3 different projects is quite correct IMO, as each part is clearly independently useful (or at least, such dependencies as exist are one-way only). If you want a nice example of submodules, you can see it here with LDC (a compiler for the D programming language based on the LLVM backend). This project has the language runtime, standard library (Phobos) and test suite each as separate submodules. https://github.com/ldc-developers/ldc Since OLL is moving back to GitHub, that shouldn't be a problem. |
From: Urs L. <ul...@op...> - 2013-03-28 11:46:02
|
Hi, after my first experiences with SourceForge I have to say I'm quite disappointed and will soon revert the move to that platform: - The web site is rather clumsy and unresponsive - The product is quite buggy and they are struggling with permission issues: - I couldn't delete a subproject, and a staff member with higher permission had to do that - Updating things often isn't propagated to the site (for example reordering the menu or removing entries from it often didn't work) - Project descriptions and/or feature list weren't displayed on the project main page - Code hosting is by far inferior to other providers - The commit browser isn't really interesting - especially compared to GitHub's network graph - Pull requests are possible (although still undocumented!) but very awkward: You can't process them online but have to pull them first before you can see their content. One of my two trials didn't even work at all because Git refused to merge (not because of merge conflicts but because what I got to fetch was "nothing we can merge") ... That last point was the show-stopper for me. Pull requests are really indispensable for a project like this: I want as many contributors as possible, but of course we can't give everybody push access. So we'll have to change our options once more. It seems we won't get a 'one-stop-shop' now, but OK, who has it? (LilyPond doesn't, for example) I will make a few announcements and ask for voting on a few open decisions now: Code hosting ======== As the main provider for hosting I see GitHub and BitBucket. While their offerings are quite similar I decided to go for GitHub for the following reasons: - The web experience is even better with GitHub (esp. 
for the network graph) - you can actually edit files through the web interface (which is handy for README files, for example) - The issue tracker is slightly more configurable (although I don't like it too much) - You can actually edit pull requests online (although BitBucket's implementation is very good too) - See below for the project website issue Web site ======== Other than SourceForge, both competitors only offer static web sites to be served. But GitHub offers a tool called Jekyll which seems a good alternative. Jekyll is a Ruby script that takes a set of templates and content files (similar to stacey) and renders them into a static web site before pushing it to a web server. Which of course results in fast, reliable and secure web sites. Of course GitHub offers that with special features: The project web site resides in a dedicated Git repository, and whenever something is pushed to the master branch, Jekyll processes the content on the GitHub server, so from the point of view of a content author it actually behaves like a server-side CMS. This is far better than the handling on SourceForge, where one has to upload the web content separately. The only drawback I see so far is that Jekyll is _very_ blog-centric, so it may become somewhat awkward to talk it into creating a traditional web site with hierarchical navigation. The domain redirect will work as with SourceForge, i.e. the web site will be accessible through the domain. Issues ======= As mentioned I don't like the GitHub issue tracker that much, but I think we can live with that. Integration of Git pushes with issues is a nice thing SourceForge doesn't offer (although BitBucket is far more versatile in this respect). File downloads ======= GitHub has discontinued its download pages (because they want to concentrate on _code_), but one can always download a complete checked out branch as a zip file. 
I think we can do without SourceForge's extensive download area and just provide our downloads as part of the Web site. This means we'll have to include the binaries in the website repository, but maybe that's ok. Or do you think we should put binary release files somewhere else (e.g. on a ftp server)? Mailing list ======= That's the only thing we won't get from GitHub. I see two options and would like you to vote: a) keep the SourceForge project only for the mailing list. I can imagine having a project page on SourceForge that just links to our other resources could be good in order to be found more easily. Although it _does_ imply some overhead. b) Recreate the Google group (sorry, Jan-Peter ;-) ) Best Urs |
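On the Jekyll concern raised above: a non-blog, hierarchical site needs surprisingly little, namely a configuration file plus ordinary pages with YAML front matter. The scaffold below is only a sketch (file contents are illustrative); on GitHub the build is triggered by pushing the source to the appropriate branch of the site repository.

```shell
set -e
cd "$(mktemp -d)"

# Site-wide settings, read by Jekyll when it builds the site.
cat > _config.yml <<'EOF'
name: openLilyLib
EOF

# A shared page template; Jekyll substitutes {{ content }} with each page's body.
mkdir _layouts
cat > _layouts/default.html <<'EOF'
<html><body>{{ content }}</body></html>
EOF

# An ordinary, non-blog page: YAML front matter, then content.
# More such pages in subdirectories give a hierarchical site.
cat > index.md <<'EOF'
---
layout: default
title: openLilyLib
---
Resources for LilyPond users.
EOF
```

Blog posts (the `_posts` directory) are entirely optional, so the blog-centricity is more a matter of documentation emphasis than a hard constraint.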
From: Urs L. <ul...@op...> - 2013-03-25 22:43:56
|
On Mon, 25 Mar 2013 17:59:04 +0100 Janek Warchoł <lem...@gm...> wrote: > 2013/3/23 Urs Liska <ul...@op...> > > > On the other hand, I'm quite tempted to redesign the history from > > scratch because it is really quite messy IMO (partly because of the fact > > that it's a merged history (there are three 'initial commits' for > > example) and partly because of me only learning all this stuff. And if I > > did that it would probably reduce the number of commits through numerous > > squashes anyway. > > > > I know that temptation. > https://github.com/janek-warchol/eja-mater-demonstration is an example of > possible results. Was it worth it? Well, only if it'll be used for > analyzing LilyPond performance (another Report article maybe) and/or > demonstrating lily+git workflow to potential "employers". I have done that now for a feature branch that I boiled down from 9 unordered to 5 tidy commits. I think that's worth it, to tidy up a feature branch before merging into master (or 'develop'), but not for a whole repository. > ... > > These two also return 404 errors, so i assume i'm too late. Sorry :( No problem, I think I'm fine with this issue :-) > Janek |
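Boiling a feature branch down from nine unordered to five tidy commits, as described above, is typically done with an interactive rebase. A self-contained sketch (repository, branch, and commit names are invented; the todo-list edit that one would normally do in an editor is scripted via GIT_SEQUENCE_EDITOR so the example runs unattended):

```shell
set -e
cd "$(mktemp -d)"
git config --global user.email demo@example.com
git config --global user.name Demo
git config --global init.defaultBranch master

git init -q repo && cd repo
echo base > file && git add file && git commit -qm "initial"

# A feature branch with several untidy work-in-progress commits.
git checkout -q -b feature
for i in 1 2 3; do echo "step $i" >> file; git commit -qam "wip $i"; done

# Interactively one would run 'git rebase -i master' and change 'pick'
# to 'squash' or 'fixup' in the todo list; here all but the first
# commit become fixups (which keep the first commit's message).
GIT_SEQUENCE_EDITOR="sed -i '1!s/^pick/fixup/'" git rebase -i master

git log --oneline master..feature   # the branch is now one tidy commit
```

This is worth doing on a feature branch before it is merged, exactly as concluded above; rewriting already-published history is a different matter.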
From: Janek W. <lem...@gm...> - 2013-03-25 17:07:28
|
Hi, catching up :) 2013/3/23 Urs Liska <ul...@op...> > I applied a strategy that was described here > <https://sandofsky.com/blog/git-workflow.html> quite spot on as > "declaring branch bankruptcy". Through 'git commit --squash' I created > one big commit with the difference of the initial empty state to the > current working tree. This is rather clean because it just skips all > deviations, dead ends etc. and records the current working tree as if it > were imported from another location by file operations. > I think this is a good choice. > Conceptually this purges _any_ history, but I did something I found > clever: I created a tag at the HEAD of the old master branch before > deleting the old branches. Now all history is still there - in all its > messiness - but out of sight in the (branch-free) tag archive/oldmaster. > So it doesn't disturb the history graph anymore but is readily available > for inspection through 'git checkout archive/oldmaster'. very good idea! I removed the temporary repositories from SourceForge and pushed my > updated repo to the original place > https://sourceforge.net/p/openlilylib/code/ > In the root dir you'll find a file README-branching.md with my thoughts > on a branching strategy. Please have a look at it and tell me if that's > realistic. > LGTM. best, Janek |
From: Janek W. <lem...@gm...> - 2013-03-25 16:59:34
|
2013/3/23 Urs Liska <ul...@op...> > On the other hand, I'm quite tempted to redesign the history from > scratch because it is really quite messy IMO (partly because of the fact > that it's a merged history (there are three 'initial commits' for > example) and partly because of me only learning all this stuff. And if I > did that it would probably reduce the number of commits through numerous > squashes anyway. > I know that temptation. https://github.com/janek-warchol/eja-mater-demonstration is an example of possible results. Was it worth it? Well, only if it'll be used for analyzing LilyPond performance (another Report article maybe) and/or demonstrating lily+git workflow to potential "employers". > What I would like to have (after reading several documents with > work-flow suggestions) would be: > - a master branch which only has 'stable' commits, e.g. one commit for > each tag, > - a develop branch which also only has stable commits, but more, i.e. > one commit for each completed feature branch, sth like > > master > develop > * release OLLib v0.2 > |\ > | * oll: add \displayControlPoints > | * oll: add someFeature > * | release Tutorial Gervasoni > |\ > | * tut: finish tutorial Gervasoni > | * xmp: fix #11 - odd/even-sided examples > | * tut: > looks nice. > ########### > One day later: > ########### > > Yesterday I started such a redesign (after making a copy with git clone > of course - I have put quite some effort in the existing state, and I > won't start the merge from the beginning again ...). > And it seems to work basically, but (and that's a big but) it seems to > mean that I have to effectively review each single commit, and the > perspective is quite discouraging. > If you want to have a look at the current state, you may go to > http://sourceforge.net/p/openlilylib/rebase-repo/ and browse commits > there or clone into it. 
> I get a 404 error "We're sorry but we weren't able to process this request" > I _could_ proceed that way, but I have the impression this would take me > about a week, and I'm not sure if that's worth it. Although I assume > (hope) this repo will live for years ... > I don't know if it's not too late for such advice, but i'd advise not to do this. I think that most projects have a messy history at the beginning, and anyway if everything will be ok we hope that browsing very old history won't be necessary. > > Maybe a few final questions that can be answered with a plain opinion > statement: > > * If you look at the history in > http://sourceforge.net/p/openlilylib/rebase-repo/commit_browser (the > upper part down to the first "initial commit"), would you think > that's good design? Or could I go this way but squash the commits in > 'develop' more radically, i.e. leaving only commits like 'add > commands', 'update manual' or 'fix several bugs' > * Or if you look at the history in > http://sourceforge.net/p/openlilylib/mergerepos/commit_browser (the > state after merging the repos: Would it be acceptable (i.e. the > better way because it is considerably easier) to leave that history > as it is (towards the end it is quite linear actually) and only > start to adhere to the strategy described above from now on. I.e. > consider the history up to now as the somewhat messy and voluminous > 'initial state'? > These two also return 404 errors, so i assume i'm too late. Sorry :( Janek |
From: Urs L. <ul...@op...> - 2013-03-23 15:58:26
|
Hi, sorry for the amount of noise lately. But it seems I need to write emails to get the idea myself afterwards ;-) I think I found the solution to make the mess manageable without spending dozens of hours of work. I applied a strategy that was described here <https://sandofsky.com/blog/git-workflow.html> quite spot-on as "declaring branch bankruptcy". Through 'git commit --squash' I created one big commit with the difference from the initial empty state to the current working tree. This is rather clean because it just skips all deviations, dead ends etc. and records the current working tree as if it had been imported from another location by file operations. Conceptually this purges _any_ history, but I did something I found clever: I created a tag at the HEAD of the old master branch before deleting the old branches. Now all history is still there - in all its messiness - but out of sight in the (branch-free) tag archive/oldmaster. So it doesn't disturb the history graph anymore but is readily available for inspection through 'git checkout archive/oldmaster'. I removed the temporary repositories from SourceForge and pushed my updated repo to the original place https://sourceforge.net/p/openlilylib/code/ In the root dir you'll find a file README-branching.md with my thoughts on a branching strategy. Please have a look at it and tell me if that's realistic. Best Urs -------------- next part -------------- An HTML attachment was scrubbed... |
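The tag-and-squash maneuver described in this mail can be sketched as follows. Note that `git commit --squash` is strictly speaking the autosquash-fixup option; what the mail describes is closer to creating an orphan root commit, so this sketch (with illustrative names and a throwaway demo repository) uses `git checkout --orphan` instead:

```shell
# Demo scaffolding: a throwaway repo with a "messy" two-commit history
# standing in for the merged openLilyLib repository.
set -e
cd "$(mktemp -d)"
git init -q repo && cd repo
git symbolic-ref HEAD refs/heads/master   # pin the branch name used below
git config user.email demo@example.com
git config user.name Demo
echo one > file.txt && git add . && git commit -qm "messy commit 1"
echo two > file.txt && git add . && git commit -qm "messy commit 2"

# 1. Park the entire messy history under a tag, outside every branch.
git tag archive/oldmaster master

# 2. "Declare bankruptcy": an orphan branch whose single root commit
#    records the current working tree with no ancestry.
git checkout -q --orphan clean master
git commit -qm "Initial state: import current working tree"

# 3. Make the clean one-commit line the new master.
git branch -M clean master

git rev-list --count master   # -> 1; the old mess stays reachable via the tag
```

The key point is that the tag keeps the old commits reachable, so `git gc` will not discard them even though no branch points at them anymore.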
From: Urs L. <ul...@op...> - 2013-03-23 08:56:08
|
On 22.03.2013 11:23, Urs Liska wrote: > ... >> AFAICS it is a working repository, although history isn't all too >> nice (e.g. in such a combined repo I would prefix each commit >> message with a label (like oll, xmp) which I obviously haven't >> done in the past). >> >> >> You should be able to do this now using git filter-branch (of course >> this will change history, but maybe it's worth it). Some info for >> example here: >> http://mm0hai.net/blog/2011/03/10/rewriting-git-commit-message-history.html > I'll look into that (and read the "rewriting history" chapter of the > main book). History rewriting is surely not an issue in this case, as I > think I can assume the repository sort of private, I can't imagine > anybody actually has based some work on it yet). > And if I can manage to make the history look cleaner (as a kind of > initial state) I'd appreciate it - but only if that doesn't mean I have > to inspect the 536 commits individually ;-) Hm, having read the link you provided, "Rewriting History" from ProGit and the manual entry for git-filter-branch, I still don't see at all how I could proceed to clean up the history. Any filter-branch approach would only make sense if I could somehow reword the commit messages automatically - and I don't see how that should be possible. What would probably be necessary is a complete interactive rebase - which isn't a nice perspective with that number of commits. On the other hand, I'm quite tempted to redesign the history from scratch because it is really quite messy IMO (partly because it's a merged history (there are three 'initial commits' for example) and partly because I was still learning all this stuff). And if I did that it would probably reduce the number of commits through numerous squashes anyway. What I would like to have (after reading several documents with work-flow suggestions) would be: - a master branch which only has 'stable' commits, e.g.
one commit for each tag,
- a develop branch which also only has stable commits, but more, i.e. one commit for each completed feature branch, something like:

master  develop
* release OLLib v0.2
|\
| * oll: add \displayControlPoints
| * oll: add someFeature
* | release Tutorial Gervasoni
|\
| * tut: finish tutorial Gervasoni
| * xmp: fix #11 - odd/even-sided examples
| * tut:

###########
One day later:
###########

Yesterday I started such a redesign (after making a copy with git clone of course - I have put quite some effort into the existing state, and I won't start the merge from the beginning again ...). And it seems to work basically, but (and that's a big but) it seems to mean that I have to effectively review each single commit, and the perspective is quite discouraging. If you want to have a look at the current state, you may go to http://sourceforge.net/p/openlilylib/rebase-repo/ and browse commits there or clone into it. Branches master and develop are the ones I have created from scratch, oldmaster is what the name suggests ;-) and the other branches are temporary ones that I use(d) to rip partial branches off oldmaster. The relation of master and develop is how I intended it (although the SourceForge graph displays the chronological relation differently), but develop isn't as clean as I would have liked. Probably it is possible to make that better, but either with unreasonable effort or by losing too much detail in the history. Basically develop contains way too many intermediate commits that should have been made in feature branches and squashed before merging back to develop. I _could_ proceed that way, but I have the impression this would take me about a week, and I'm not sure if that's worth it. Although I assume (hope) this repo will live for years ...
Besides the problem of the sheer amount of time, I have more disturbing problems when I encounter rebase conflicts, because I don't really know whether I cause damage when resolving them manually - I no longer know which of the conflicting versions is 'better', and especially which one leads to the final state. I don't want to think about the possibility that I introduce changes that pass through all later merges but cause the final result to be different from what it should be ... To make that perfectly clear: I DON'T EXPECT ANYBODY TO GO INTO THAT IN DETAIL because I know this might take some time. I'd be happy about any assistance, but I'm ready to find my way through it on my own. Maybe a few final questions that can be answered with a plain opinion statement:

* If you look at the history in http://sourceforge.net/p/openlilylib/rebase-repo/commit_browser (the upper part down to the first "initial commit"), would you think that's good design? Or could I go this way but squash the commits in 'develop' more radically, i.e. leaving only commits like 'add commands', 'update manual' or 'fix several bugs'?
* Or if you look at the history in http://sourceforge.net/p/openlilylib/mergerepos/commit_browser (the state after merging the repos): Would it be acceptable (i.e. the better way because it is considerably easier) to leave that history as it is (towards the end it is quite linear actually) and only start to adhere to the strategy described above from now on? I.e. consider the history up to now as the somewhat messy and voluminous 'initial state'?

Best Urs > Best > Urs >> best, >> janek > -------------- next part -------------- > An HTML attachment was scrubbed...
> _______________________________________________ > openlilylib-user mailing list > ope...@li... > https://lists.sourceforge.net/lists/listinfo/openlilylib-user -------------- next part -------------- An HTML attachment was scrubbed... |
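The master/develop discipline proposed in this mail - squash each feature branch to a single commit on develop, and give master one merge commit per release - might look like this in commands (branch names and commit messages are taken from the email's example graph; the scratch repository is just demo scaffolding):

```shell
# Demo scaffolding: scratch repository.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com && git config user.name Demo
git commit -qm "initial state" --allow-empty

# develop branches off master and collects one commit per finished feature.
git checkout -q -b develop

# A feature is developed in as many small commits as needed...
git checkout -q -b feature/displayControlPoints
echo code > cp.ily && git add . && git commit -qm "wip"
echo more >> cp.ily && git add . && git commit -qm "more wip"

# ...but lands on develop as a single squashed commit.
git checkout -q develop
git merge -q --squash feature/displayControlPoints
git commit -qm "oll: add \displayControlPoints"
git branch -qD feature/displayControlPoints

# At release time, master receives exactly one merge commit, then a tag.
git checkout -q master
git merge -q --no-ff -m "release OLLib v0.2" develop
git tag v0.2

git log --oneline master   # initial state, squashed feature, release merge
```

This is essentially a simplified git-flow: develop stays readable because the 'wip' commits never reach it, and master shows one node per release.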
From: Urs L. <ul...@op...> - 2013-03-22 10:24:08
|
On 21.03.2013 22:46, Janek Warchoł wrote: > > > > 2013/3/21 Urs Liska <ul...@op... <mailto:ul...@op...>> > > ... > > But I think I have managed to set up a new, merged repository. > > > Congrats! > > It is here: https://sourceforge.net/p/openlilylib/mergerepos/ > and I have uploaded a tar.gz: > https://dl.dropbox.com/u/49478835/ollMerged.tar.gz > It would be nice if you (Janek) could have a glance at that and > its history, just to see if it looks reasonable to you). > > > I've looked and i don't see anything wrong. Looks like a clean job! Well, it wasn't really. I was quite irritated that checking out the lgFindstale branch made quite a lot of folders disappear (fortunately this doesn't actually shock me anymore because I trust that checking out another branch will bring my folders back). After some thinking I realized the cause (well, actually I woke up with that thought ;-) ): I had first imported the lilyglyphs repo (to which lgFindstale belongs) and then the musicexamples repo. Therefore the lgFindstale branch didn't include the commits introducing the musicexamples files. So actually it was all solved with:

git checkout lgFindstale
git rebase master

Even without any rebase conflicts. Now it seems OK and nearly ready to replace the original repository. > > AFAICS it is a working repository, although history isn't all too > nice (e.g. in such a combined repo I would prefix each commit > message with a label (like oll, xmp) which I obviously haven't > done in the past). > > > You should be able to do this now using git filter-branch (of course > this will change history, but maybe it's worth it). Some info for > example here: > http://mm0hai.net/blog/2011/03/10/rewriting-git-commit-message-history.html I'll look into that (and read the "rewriting history" chapter of the main book).
History rewriting is surely not an issue in this case, as I think I can assume the repository is sort of private (I can't imagine anybody has actually based any work on it yet). And if I can manage to make the history look cleaner (as a kind of initial state) I'd appreciate it - but only if that doesn't mean I have to inspect the 536 commits individually ;-) Best Urs > > best, > janek -------------- next part -------------- An HTML attachment was scrubbed... |
From: Janek W. <lem...@gm...> - 2013-03-21 21:46:51
|
2013/3/21 Urs Liska <ul...@op...> > Maybe I succeeded already (although it was at the cost of some nerves and > an aching back ;-) ) > Submodules and subtree merge aren't intended for our purpose, but > http://saintgimp.org/2013/01/22/merging-two-git-repositories-into-one-repository-without-losing-file-history/ helped. > > It was quite complex because it is difficult to move content around when > there is more than one branch. And I ran into trouble (probably) because > the history wasn't really straightforward (I had moved folders several > times and once factored a complete folder out into another repo (i.e. > deleted it from Git's perspective). > > But I think I have managed to set up a new, merged repository. > Congrats! > It is here: https://sourceforge.net/p/openlilylib/mergerepos/ > and I have uploaded a tar.gz: > https://dl.dropbox.com/u/49478835/ollMerged.tar.gz > It would be nice if you (Janek) could have a glance at that and its > history, just to see if it looks reasonable to you). > I've looked and i don't see anything wrong. Looks like a clean job! > AFAICS it is a working repository, although history isn't all too nice > (e.g. in such a combined repo I would prefix each commit message with a > label (like oll, xmp) which I obviously haven't done in the past). > You should be able to do this now using git filter-branch (of course this will change history, but maybe it's worth it). Some info for example here: http://mm0hai.net/blog/2011/03/10/rewriting-git-commit-message-history.html best, janek -------------- next part -------------- An HTML attachment was scrubbed... |
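Regarding the open question of rewording commit messages automatically: `git filter-branch` does support this through `--msg-filter`, which pipes each old message through a shell command. A minimal sketch that adds a fixed `oll:` prefix (a real migration would derive the prefix per commit, e.g. from the touched paths, and today `git filter-repo` is the recommended replacement for `filter-branch`):

```shell
# Demo scaffolding: scratch repository with two unprefixed commits.
set -e
cd "$(mktemp -d)" && git init -q repo && cd repo
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com && git config user.name Demo
echo a > a.txt && git add . && git commit -qm "add commands"
echo b > b.txt && git add . && git commit -qm "update manual"

# --msg-filter receives each old message on stdin; its stdout becomes
# the rewritten message. Here: prepend a fixed label to every commit.
FILTER_BRANCH_SQUELCH_WARNING=1 \
  git filter-branch -f --msg-filter 'printf "oll: " && cat' HEAD

git log --format=%s   # -> "oll: update manual" then "oll: add commands"
```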
From: Urs L. <ul...@op...> - 2013-03-21 17:48:54
|
On 21.03.2013 12:53, Janek Warchoł wrote: > 2013/3/21 Urs Liska <ul...@op... <mailto:ul...@op...>> > > I have just read the submodules chapter in the progit book, and it > seems it isn't the best solution. > First, a submodule is a repository with a different remote (i.e. > pull in another repository to use and manage it within the main > repository). So we'd have to keep the other remote repositories > anyway (IIUC). Second, the book issues some warnings about having > to be careful not to break things when working in submodules. > > For my taste this rather looks like keeping the structure as it is > or managing to merge the repositories and maintain them as one. > > > You are probably quite right. I hope to try merging some repos, maybe > on Easter. > > best, > Janek Maybe I succeeded already (although it was at the cost of some nerves and an aching back ;-) ) Submodules and subtree merge aren't intended for our purpose, but http://saintgimp.org/2013/01/22/merging-two-git-repositories-into-one-repository-without-losing-file-history/ helped. It was quite complex because it is difficult to move content around when there is more than one branch. And I ran into trouble (probably) because the history wasn't really straightforward (I had moved folders several times and once factored a complete folder out into another repo, i.e. deleted it from Git's perspective). But I think I have managed to set up a new, merged repository. It is here: https://sourceforge.net/p/openlilylib/mergerepos/ and I have uploaded a tar.gz: https://dl.dropbox.com/u/49478835/ollMerged.tar.gz It would be nice if you (Janek) could have a glance at that and its history, just to see if it looks reasonable to you. AFAICS it is a working repository, although history isn't all too nice (e.g. in such a combined repo I would prefix each commit message with a label (like oll, xmp) which I obviously haven't done in the past).
Now I'll have to bring the directory structure in line again, update some readme files and do clean-up stuff like that. But that should be a breeze now ... And finally I'll replace the existing repo so we'll only have one repo. Best Urs -------------- next part -------------- An HTML attachment was scrubbed... |
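The merge technique from the linked saintgimp article reduces to: add the second repository as a plain remote, fetch it, and merge the two unrelated histories (the article additionally moves each repository's content into its own subdirectory first so the trees don't collide). A condensed sketch with stand-in repositories; note that since Git 2.9 the merge requires `--allow-unrelated-histories`, while in 2013 it was simply allowed:

```shell
# Demo scaffolding: two independent repositories standing in for
# lilyglyphs and musicexamples.
set -e
work="$(mktemp -d)"
for name in lilyglyphs musicexamples; do
  git init -q "$work/$name" && cd "$work/$name"
  git symbolic-ref HEAD refs/heads/master
  git config user.email demo@example.com && git config user.name Demo
  echo "$name" > "$name.txt" && git add . && git commit -qm "$name: initial"
done

# Pull musicexamples into lilyglyphs as a plain remote and merge the
# two unrelated histories into one.
cd "$work/lilyglyphs"
git remote add xmp "$work/musicexamples"
git fetch -q xmp
git merge -q --allow-unrelated-histories -m "merge musicexamples history" xmp/master
git remote rm xmp

ls                  # files from both repositories
git log --oneline   # commits from both original histories
```

Moving the imported files around afterwards is then an ordinary `git mv` commit; file history survives because both original commit chains remain ancestors of the merge.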
From: Janek W. <lem...@gm...> - 2013-03-21 11:53:55
|
2013/3/21 Urs Liska <ul...@op...> > I have just read the submodules chapter in the progit book, and it seems > it isn't the best solution. > First, a submodule is a repository with a different remote (i.e. pull in > another repository to use and manage it within the main repository). So > we'd have to keep the other remote repositories anyway (IIUC). Second, the > book issues some warnings about having to be careful not to break things > when working in submodules. > > For my taste this rather looks like keeping the structure as it is or > managing to merge the repositories and maintain them as one. > You are probably quite right. I hope to try merging some repos, maybe on Easter. best, Janek -------------- next part -------------- An HTML attachment was scrubbed... |
From: Urs L. <ul...@op...> - 2013-03-20 23:25:22
|
On Wed, 20 Mar 2013 17:11:53 +0100 Janek Warchoł <lem...@gm...> wrote: > 2013/3/20 Urs Liska <ul...@op...> > > > > I like tinkering with git and i think i could learn how to merge > > repositories. However, maybe git submodules would be a correct answer? A > > submodule is just a repository inside another repository, so that we could > > have a big repo containing all projects, while all of them would remain to > > be separate repos. > > I don't know how well that plays with SourceForge, however. > > > > Hm, for some reason I don't really know I'm 'scared' by Git submodules. > > The documentation on SourceForge doesn't mention submodules, but I don't > > know if there are special needs for that on the server. > > I think I will look in the official Git docs and get a better idea what > > submodules really are. > > > > I have a bit of experience with them and they seem to be pretty > straightforward. I was even able to debug a corrupted git repository which > contained a submodule, as you can see here: > http://stackoverflow.com/questions/14797978/git-recovery-object-file-is-empty-how-to-recreate-trees > (that was really awesome! I felt like a pro when i had finished ;-) ) :-) I have just read the submodules chapter in the progit book, and it seems it isn't the best solution. First, a submodule is a repository with a different remote (i.e. pull in another repository to use and manage it within the main repository). So we'd have to keep the other remote repositories anyway (IIUC). Second, the book issues some warnings about having to be careful not to break things when working in submodules. For my taste this rather looks like keeping the structure as it is or managing to merge the repositories and maintain them as one. Best Urs > > best, > Janek |
From: Janek W. <lem...@gm...> - 2013-03-20 16:12:21
|
2013/3/20 Urs Liska <ul...@op...> > the website content is in a dedicated directory within the openLilyLib > repository. (you can browse that in the "Code" area of openLilyLib, the > directory is 'project-web'. > AFAICS one can't use Git to manage the project web space (which is a pity > if it's true). > So the work-flow is: Make changes to the content (or the templates or the > css ...), commit and push them to the openLilyLib repo and upload them to > the web space with rsync. > It's conceptionally a little bit awkward but works quite smoothly. > ah, ok. > I like tinkering with git and i think i could learn how to merge > repositories. However, maybe git submodules would be a correct answer? A > submodule is just a repository inside another repository, so that we could > have a big repo containing all projects, while all of them would remain to > be separate repos. > I don't know how well that plays with SourceForge, however. > > Hm, for some reason I don't really know I'm 'scared' by Git submodules. > The documentation on SourceForge doesn't mention submodules, but I don't > know if there are special needs for that on the server. > I think I will look in the official Git docs and get a better idea what > submodules really are. > I have a bit of experience with them and they seem to be pretty straightforward. I was even able to debug a corrupted git repository which contained a submodule, as you can see here: http://stackoverflow.com/questions/14797978/git-recovery-object-file-is-empty-how-to-recreate-trees (that was really awesome! I felt like a pro when i had finished ;-) ) best, Janek -------------- next part -------------- An HTML attachment was scrubbed... |
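For completeness, the submodule mechanics weighed in this thread: a submodule is a pointer to a specific commit of a separate repository, and that repository's own remote URL is recorded in .gitmodules - which is exactly why the other remote repositories would have to be kept around. A minimal local sketch (paths are throwaway demo scaffolding; the `protocol.file.allow` override is only needed on modern Git, which restricts file-protocol submodules by default):

```shell
# Demo scaffolding: a local repo standing in for the lilyglyphs project.
set -e
work="$(mktemp -d)"
git init -q "$work/lilyglyphs" && cd "$work/lilyglyphs"
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com && git config user.name Demo
echo glyphs > glyphs.txt && git add . && git commit -qm "lilyglyphs: initial"

# The umbrella repository pins lilyglyphs at a specific commit; the
# submodule's own remote URL is recorded in .gitmodules.
git init -q "$work/openlilylib" && cd "$work/openlilylib"
git symbolic-ref HEAD refs/heads/master
git config user.email demo@example.com && git config user.name Demo
git -c protocol.file.allow=always submodule --quiet add "$work/lilyglyphs" lilyglyphs
git commit -qm "add lilyglyphs as a submodule"

# A fresh clone only gets an empty directory plus the pinned commit id;
# the submodule must be fetched from its *separate* remote explicitly.
git clone -q "$work/openlilylib" "$work/clone"
cd "$work/clone"
git -c protocol.file.allow=always submodule --quiet update --init
cat lilyglyphs/glyphs.txt   # -> glyphs
```

Updating a submodule later means committing a new pinned commit id in the parent repository, which is one of the "be careful not to break things" points the ProGit book warns about.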