From: M. D. <ztr...@ad...> - 2007-07-31 12:42:18
|
My attempt to condense a few related discussions and provide a solution.

First, automatically calculated notation properties must be local to a particular type of staff (more specific than just TabStaff, since we need the number of lines and tuning to determine HEIGHT_ON_STAFF.)

There also need to be properties that the user can set (e.g. STEM_UP) for a particular staff type. If present, these override the corresponding local properties. These properties outlast a particular view, so they are "global", but there needs to be a way to tag them as applying only to a particular staff type. Use a prefix to indicate staff type?

Second is Segment selection/viewing when the Segments overlap. It seems what's desired is Heikki Junes's suggestion that we have Track x, Segment y, where x and y are numbers. This corresponds to my want of: given a track, what segments does it have? A way to achieve this would be to give Track a list of Segments. This doesn't seem like an overly difficult thing to achieve.

For dealing with multiple staff types in the tablature implementation, I've put a StaffType member in Track. I can't think of a reason not to.

Now for notation layout. To deal with multiple staff types, there needs to be another layer. Things like NotePixmapFactory, and some of the layout code, are dependent upon staff type. (My naming scheme here gets yucky because of RG's way of defining a "staff" as corresponding to a segment rather than to a track.) 
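[As a concrete sketch of the prefix idea in the first paragraphs: a lookup keyed by "<staffType>::<property>", where a user-set "global" value overrides the locally calculated one. All names here are invented for illustration; this is not Rosegarden's actual property API.]

```cpp
#include <map>
#include <optional>
#include <string>

// Hypothetical sketch: notation properties namespaced by staff type.
class PropertyStore
{
public:
    // Locally (automatically) calculated value for one staff type
    void setLocal(const std::string &staffType, const std::string &prop, int value) {
        m_local[staffType + "::" + prop] = value;
    }
    // User-set value; outlasts a particular view, but is tagged with the staff type
    void setGlobal(const std::string &staffType, const std::string &prop, int value) {
        m_global[staffType + "::" + prop] = value;
    }
    // The user setting, if present, overrides the local one
    std::optional<int> get(const std::string &staffType, const std::string &prop) const {
        std::string key = staffType + "::" + prop;
        auto g = m_global.find(key);
        if (g != m_global.end()) return g->second;
        auto l = m_local.find(key);
        if (l != m_local.end()) return l->second;
        return std::nullopt;
    }
private:
    std::map<std::string, int> m_local, m_global;
};
```

[The point of the key scheme is only that the same property name (STEM_UP) can carry different values per staff type without the two colliding.]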
I've listed some (highly abbreviated) classes:

class NotationTrack
{
    // I own these
    vector<NotationStaff*> m_staffs;  // all belong to same Track
    NotePixmapFactory *m_npf;
    NotationTrackHLayout *m_hlayout;
    NotationTrackVLayout *m_vlayout;
};

class NotationView
{
    // I own these
    vector<NotationTrack*> m_notationTracks;
    NotationViewHLayout *m_hlayout;
    NotationViewVLayout *m_vlayout;
};

When NotationView creates the staffs, it also creates the NotationTracks and puts the appropriate staffs in them. The NotationTrack ctor is passed font name and size so it can create a NotePixmapFactory.

class NotationTrackHLayout
{
    scan();
    scanChord();
    positionChord();

    NotationStaff *m_staff;
    BarDataList m_bdl;
};

class NotationViewHLayout
{
    scanTrack(NotationTrack *ntrk);
    scanStaff(NotationStaff *staff);
    reconcileLayout();

    NotationView *m_view;
};

From m_view, NotationViewHLayout can get the appropriate NotationTrack for scanTrack() and scanStaff(), as well as for reconciling the layout.

This doesn't solve everything, but it allows different staff types in the same view. |
From: M. D. <ztr...@ad...> - 2007-07-31 13:02:17
|
On Tuesday 31 July 2007 12:38, M. Donalies wrote:
> class NotationTrack
> {
>     // I own these
>     vector<NotationStaff*> m_staffs;  // all belong to same Track
>     NotePixmapFactory *m_npf;
>     NotationTrackHLayout *m_hlayout;
>     NotationTrackVLayout *m_vlayout;
> };

In my implementation, this is actually:

class NotationTrack
{
    addStaff(NotationStaff *staff);

    // parent view owns these
    vector<NotationStaff*> m_staffs;  // all belong to same Track

    // I own these
    NotePixmapFactory *m_npf;
    NotationTrackHLayout *m_hlayout;
    NotationTrackVLayout *m_vlayout;

    TrackId m_trackId;
};

NotationTrack::addStaff(NotationStaff *staff)
{
    // if already in m_staffs, return
    TrackId id = staff->getSegment().getTrackId();
    if (id != m_trackId) return;
    m_staffs.push_back(staff);
} |
From: Chris C. <ca...@al...> - 2007-07-31 18:47:31
|
On Tuesday 31 July 2007 13:38, M. Donalies wrote:
> First, automatically calculated notation properties must be local to
> a particular type of staff (more specific than just TabStaff, since
> we need the number of lines and tuning to determine HEIGHT_ON_STAFF.)
>
> There also need to be properties that the user can set (e.g.
> STEM_UP) for a particular staff type. If present, these override the
> corresponding local properties. These properties outlast a particular
> view, so they are "global", but there needs to be a way to tag them
> as applying only to a particular staff type. Use a prefix to indicate
> staff type?

Yes, it sounds like we want a prefix (or other namespacing mechanism) encapsulating the type and "settings" that make up the staff.

I don't think that quite describes it yet, though. Consider STEM_UP again for example; a user setting it for a note on a single normal staff might or might not expect the setting to persist when the note is next viewed on a grand staff. If voices are supported, then a note may appear in one staff with other voices and in another staff of exactly the same type without any other voices, and the proper STEM_UP value would be different in either case. Is there any way to deal with that?

Also, although these properties are specific to the staff type in the sense that the user might want to set different values in different staff types, in normal use they're probably more likely to want to have a new staff type take its default values from those they set in any previous staff type.

> Second is Segment selection/viewing when the Segments overlap. It
> seems what's desired is Heikki Junes's suggestion that we have Track x,
> Segment y, where x and y are numbers.

Mmm, as you may have noticed I don't really agree that segment numbering is a good idea in that situation. 
You may want a numbering for the voices within a staff, but that isn't the same thing (they should be on different tracks anyway, if we supported voicing properly -- having to overlap segments on a single track is a grotesque hack that we should do away with as soon as we can).

> For dealing with multiple staff types in the tablature
> implementation, I've put a StaffType member in Track. I can't think
> of a reason not to.

Here are some:

* It would be very reasonable to want to show the same segment in more than one sort of staff, maybe even at the same time. Copying it to another track would be a pretty lousy way to achieve that.

* It would be very desirable for segments that share a staff not to have to always share a track (see above). If you have segments across multiple tracks on the same staff, it doesn't make sense to associate the staff type with the track.

I strongly believe that the right container for staff-type information isn't the track or the segment, but something else that we don't yet have in Rosegarden which describes a relationship between tracks (or segments, but probably tracks) and staffs, which is the thing I was referring to as the "score" in the "Score layout" page on the wiki.

Trying to shoehorn staff properties into either the segment or the track looks to me as if it will lead to some immediate fatal problems (I think both of the above two listed problems are fatal). I would strongly encourage defining this score mapping container (which doesn't have to be very complicated) first.

If you don't get what I mean by this, or don't see the need, I'm happy to go on about it a bit more.

Chris |
From: D. M. M. <mic...@ro...> - 2007-07-31 22:20:44
|
On Tuesday 31 July 2007, Chris Cannam wrote:
> different tracks anyway, if we supported voicing properly -- having to
> overlap segments on a single track is a grotesque hack that we should
> do away with as soon as we can).

Having to do voices by manually combining different segments on different tracks (i.e. merge staffs with the same name, or whatever, at LilyPond export time) is a grotesque hack. I think it would be vastly preferable to have a system that managed overlapping segments in such a fashion that the user never had to worry that there was a v1, v2, v3, v4 version of what looked, felt, and moved like just one segment. They shouldn't ever have to muck about with more than one segment, and they should never have to manage multiple tracks, or merge options, and all that ugly nonsense we have in place because it's something that works today, not because it isn't a steaming pile of crap.

No, I think it would be far preferable to have some kind of setup where you want to create notes in voice 2 of this segment, so we create an invisible overlapping segment to which you are completely oblivious as the user. I'm not sure what that would look like object-wise. Expanding Segment so it can contain layers, or else some invisible manipulation behind the scenes to chain segments of the existing and current type together.

Along the subject of multiple voices, one seriously sucky thing about the current hacky state of affairs is the way you have to put all the chords in one voice in order for them to come out as chords. What about two voices that move in different directions, but occasionally come together on a chord? Those parts are bitchy and confusing to enter as it stands now, and I find this situation comes up again and again.

Anyway, I'm looking at this from userland again. 
I have no idea what the internals look like, or what a hideously complex thing I think would be best, but I do strongly opine that continuing to have to twiddle multiple voices across multiple tracks is the road to crap. I also agree that manipulating overlapping segments is totally evil as it exists today.

> I strongly believe that the right container for staff-type information
> isn't the track or the segment, but something else that we don't yet
> have in Rosegarden which describes a relationship between tracks (or
> segments, but probably tracks) and staffs, which is the thing I was
> referring to as the "score" in the "Score layout" page on the wiki.

Sounds like the right direction, but I'm not quite sure what you're on about.
--
D. Michael McIntyre |
From: M. D. <ztr...@ad...> - 2007-08-01 14:02:42
|
On Tuesday 31 July 2007 18:55, Chris Cannam wrote:
> Yes, it sounds like we want a prefix (or other namespacing mechanism)
> encapsulating the type and "settings" that make up the staff.
>
> I don't think that quite describes it yet, though. Consider STEM_UP
> again for example; a user setting it for a note on a single normal
> staff might or might not expect the setting to persist when the note is
> next viewed on a grand staff. If voices are supported, then a note may
> appear in one staff with other voices and in another staff of exactly
> the same type without any other voices, and the proper STEM_UP value
> would be different in either case. Is there any way to deal with that?

Yuck. There's a lot of complexity here. I can see this becoming a page-long list of ifs whenever we want to use a notation property. I'm thinking of all the command classes that access some property. I wonder if there's some better way to organize this. Giving NotationProperties some static "do the right thing" methods for commonly used situations?

> I strongly believe that the right container for staff-type information
> isn't the track or the segment, but something else that we don't yet
> have in Rosegarden which describes a relationship between tracks (or
> segments, but probably tracks) and staffs, which is the thing I was
> referring to as the "score" in the "Score layout" page on the wiki.
>
> Trying to shoehorn staff properties into either the segment or the track
> looks to me as if it will lead to some immediate fatal problems (I
> think both of the above two listed problems are fatal). I would
> strongly encourage defining this score mapping container (which doesn't
> have to be very complicated) first.
>
> If you don't get what I mean by this, or don't see the need, I'm happy
> to go on about it a bit more.

I think I see now. We have a nomenclature difference. Conceptually, what is a Track? 
I've been thinking of Track as a container of segments that all behave the same way (same instrument, same staff type, etc.) for viewing. Voices are "inside" Track. Your view seems to be the other way around: Track as a sort of synonym for voice, with our undefined container wrapping various tracks. Michael's take on voices seems to be almost as a property of a Segment (or Event).

You seem to want each voice on a separate track in order to mix and match voices in various combinations in a notation view.

Question 1: How useful is it to have voices as independent units to mix and match? I don't really see voices as independent units, but rather as pieces of a whole... (I want to say track or staff) ...Thingy. How Thingy is displayed depends upon what staff type it's given. If voice were a property of Event, then you could still display and edit by voice. What you couldn't do directly is say: let's combine Track 5 (representing voice 5) with Track 8 (representing voice 8) and display them. With a score layout mechanism, you could still achieve this by selecting some option on each track to say "only show me voice x on this track".

Question 2, which is a more general restating of Question 1: What do we get out of 1 voice per Track (and implicitly 1 voice per Segment)?

Question 3: If we have 1 voice per track, what do we call the Thingy that encapsulates a group of tracks for display as an atomic unit in a notation view?

I like the idea of a voice property in Event. Dynamically allocated voice as a "property" of Segment is basically what we have now, which I don't like at all. A manually set voice property of Segment (Track x, Segment y) would be something I could work with. I see 1 voice per Track as a hassle both for user and programmer with little if any benefit. It doesn't impede anything that I'm aware of, though. |
From: Chris C. <ca...@al...> - 2007-08-01 17:42:29
|
On Wednesday 01 August 2007 14:59, M. Donalies wrote:
> On Tuesday 31 July 2007 18:55, Chris Cannam wrote:
> > [...] a note may appear in one staff with other voices and in
> > another staff of exactly the same type without any other voices,
> > and the proper STEM_UP value would be different in either case. Is
> > there any way to deal with that?
>
> Yuck. There's a lot of complexity here. I can see this becoming a
> page-long list of ifs whenever we want to use a notation property.
> I'm thinking of all the command classes that access some property. I
> wonder if there's some better way to organize this. Giving
> NotationProperties some static "do the right thing" methods for
> commonly used situations?

Possibly, yeah. Perhaps it's better to resolve the question of how voices should be supported first, then. Looks like we're getting back into the "big plan" stuff with a vengeance.

It may be that some of these questions become academic, of course. For example, the STEM_UP dilemma would become less important when Rosegarden automatically did the right thing for stems in regions that had two voices on the same staff (top voice up, bottom voice down).

> I think I see now. We have a nomenclature difference. Conceptually,
> what is a Track? I've been thinking of Track as a container of
> segments that all behave the same way (same instrument, same staff
> type, etc.) for viewing. Voices are "inside" Track. Your view seems
> to be the other way around. Track as a sort of synonym for voice,
> with our undefined container wrapping various tracks.

When I say Track, I'm thinking strictly of what you get on the Rosegarden main window, and that is defined by the "MIDI-sequencer-ness" of Rosegarden -- it's a set of segments that all play to the same MIDI device/channel target, with broadly the same properties (program, initial controllers etc). 
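[The constraint just described -- one playback target per track -- could be modelled minimally like this. These are illustrative data structures only, not Rosegarden's actual classes: if two voices need different programs, this model forces them onto separate tracks.]

```cpp
#include <vector>

// Illustrative model: a Track carries exactly one Instrument, so
// everything on the track plays to the same device/channel/program.
struct Instrument
{
    int channel;
    int program;
};

struct Track
{
    Instrument instrument;        // single playback target for the whole track
    std::vector<int> segmentIds;  // all segments route through 'instrument'
};

// Two voices can only share a Track if their playback settings agree;
// otherwise the caller has to allocate a second Track.
bool canShareTrack(const Instrument &a, const Instrument &b)
{
    return a.channel == b.channel && a.program == b.program;
}
```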
If we take the premise that separate voices on the same staff may want to use different MIDI programs or play through different devices, it follows that separate voices have to occupy separate tracks, or else editing them in a "MIDI sequencer" is going to be appallingly hard work (and I really don't think it would be a good idea to start messing about with the MIDI sequencer interface to make it work). But is that premise valid, or would it be reasonable to always insist that separate voices in the same staff play with identical MIDI channel and program?

> You seem to want each voice on a separate track
> in order to mix and match voices in various combinations in a
> notation view.

Right. Of course "my way" would introduce complications of its own, e.g. the one with stem direction changing between different views of the same segment.

Still, I do think it's almost the case that voice support only really makes sense if it is also possible to do things like printing out individual voices on their own, or printing out a part score with one voice at the top in its own staff and all the rest in a smaller staff underneath. So:

> Question 1: How useful is it to have voices as independent units to
> mix and match?

I think it's useful (examples above), but would appreciate input from other people who want to use voice support.

> Question 2, which is a more general restating of Question 1: What do
> we get out of 1 voice per Track (and implicitly 1 voice per Segment)?

Besides the answers to question 1: the ability to play different voices (on the same staff) with different instrument sounds, and the ability to record voices separately from multiple MIDI takes without confusing the hell out of yourself in overlapping segments or having to merge segments after recording.

> Question 3: If we have 1 voice per track, what do we call the Thingy
> that encapsulates a group of tracks for display as an atomic unit in
> a notation view.

Staff.

Chris |
From: Heikki J. J. <hj...@gm...> - 2007-08-02 09:58:48
|
2007/8/1, Chris Cannam <ca...@al...>:
> > Question 3: If we have 1 voice per track, what do we call the Thingy
> > that encapsulates a group of tracks for display as an atomic unit in
> > a notation view.
>
> Staff.
>
> Chris

I agree with Chris. My original proposal was just a hack. There is indeed a need for a Staff class. Then the class references would be the following:

Segment    (refers to 1)          Track
Staff      (refers to 1 or more)  Segment
StaffGroup (refers to 1 or more)  Staff
StaffView  (refers to 1 or more)  StaffGroup (and/or Staff)

To allow nested groups, one would do:

Segment    (refers to 1)          Track
Staff      (refers to 1 or more)  Segment
StaffGroup (refers to 1 or more)  Staff or _StaffGroup_
StaffView  (refers to 1 or more)  StaffGroup (and/or Staff)

best wishes
--
Heikki |
From: D. M. M. <mic...@ro...> - 2007-08-01 22:06:11
|
On Wednesday 01 August 2007, Chris Cannam wrote:
> about with the MIDI sequencer interface to make it work). But is that
> premise valid, or would it be reasonable to always insist that separate
> voices in the same staff play with identical MIDI channel and program?

OK, I see where you're coming from on one voice, one track. If I want to do a -- what do they call it, SATB? -- score using Super Sampler Human Voice Pack 3000 XL, and I want the soprano part to play with a soprano, the tenor with a tenor, etc., then I could see where this might be a necessary way to get there. Also, if I wanted to do a fake MIDI rendering of the piece with the voices panned out, soprano far left, bass far right.

Honestly, I think all of those concerns are probably an edge case. If I have a multi-voice part I really want to split out like that for playback purposes, then I can split it out into tracks that work independently of notation. That's clunky, but I don't think it will happen very often in the real world, because people who use the voices in this fashion are the least likely to be people who really value the MIDI performance as an end in itself. They'll be writing for real musicians to play off of paper scores. Why else would they care what the notation looks like?

Besides, we already have established precedent cases that require the existence of one set of tracks/segments for notation, and one for playback. I forget when that's necessary. Probably for DS al Coda or something, as exemplified in one of the LilyPond directives demo files.

So, considering all of the above, I'm still in favor of voices as a property of segments. (Not events.) I could see continuing to use plain ol' stock segments in some new way that manages chaining them together, or expanding Segment to be able to include multiple layers within itself. Whatever seems most prudent/easiest/most likely to actually happen. 
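[The "expanding Segment to include multiple layers" idea above might look something like this. Purely illustrative -- none of these names exist in Rosegarden; the point is only that the user sees one segment while each voice lives in a hidden layer inside it.]

```cpp
#include <cstddef>
#include <map>
#include <vector>

// Hypothetical sketch of a segment that contains per-voice layers.
struct Event
{
    long time;
    int pitch;
};

class LayeredSegment
{
public:
    // Insert into a voice, creating its hidden layer on demand
    void insert(int voice, const Event &e) {
        m_layers[voice].push_back(e);
    }
    std::size_t voiceCount() const { return m_layers.size(); }
    const std::vector<Event> &layer(int voice) { return m_layers[voice]; }
    // What the track editor would show: one rectangle covering all layers
    std::size_t totalEvents() const {
        std::size_t n = 0;
        for (const auto &l : m_layers) n += l.second.size();
        return n;
    }
private:
    std::map<int, std::vector<Event>> m_layers;  // voice -> events
};
```

[The alternative Michael mentions -- invisibly chaining stock segments behind the scenes -- would keep the same outward interface but store a list of ordinary Segment pointers instead of the map.]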
> Still, I do think it's almost the case that voice support only really
> makes sense if it is also possible to do things like printing out
> individual voices on their own, or printing out a part score with one
> voice at the top in its own staff and all the rest in a smaller staff
> underneath.

Why? Who does that? When I think of parts with multiple voices, I think of:

* actual human voice music

* music for polyphonic instruments that has different lines weaving around each other on the same staff (piano, guitar, organ, or even duets or trios that are printed on the same staff)

* music for French horns (written with I and III on one staff, II and IV on the other)

The only case where one might want to split things out might be when duets, trios, etc. are written on one staff, but if I were composing a duet I intended to be read on one staff, I don't think I would ever want to split that part into its constituent halves. If that's the only thing you're worried about for splittable multi-voice parts, then we should go do some homework and see what Sibelius/Finale/etc. do. I would bet you can't split voices out into separate staffs with those high-dollar big boys, but I could be wrong.

> the same staff) with different instrument sounds. Ability to record
> voices separately from multiple MIDI takes without confusing the hell
> out of yourself in overlapping segments or having to merge segments
> after recording.

Interesting point, this. I think real keyboard players would scorn the idea of having to record voices in separate takes at all. We've had lots of complaints about how we totally crap our pants in this situation, and just render some bunch of garbage with split and tied chords. We have a hack for trying to separate the right hand from the left after the fact, but absolutely no way to cope with voices. This is very lame, but I'm not proposing we actually do anything about this particular issue for my own sake. 
I have no vested interest in this wrinkle, since I can't play multiple simultaneous voices on a keyboard anyway. Not unless I do it by accident, which is the case extremely often.
--
D. Michael McIntyre |
From: M. D. <ztr...@ad...> - 2007-08-01 23:15:11
|
On Wednesday 01 August 2007 17:50, Chris Cannam wrote:
> Possibly, yeah. Perhaps it's better to resolve the question of how
> voices should be supported first, then. Looks like we're getting back
> into the "big plan" stuff with a vengeance.

It does seem that we can't get around the "big plan" stuff. I think we're getting close to something workable, though.

> If we take the premise that separate voices on the same staff may want
> to use different MIDI programs or play through different devices, it
> follows that separate voices have to occupy separate tracks, or else
> editing them in a "MIDI sequencer" is going to be appallingly hard work
> (and I really don't think it would be a good idea to start messing
> about with the MIDI sequencer interface to make it work). But is that
> premise valid, or would it be reasonable to always insist that separate
> voices in the same staff play with identical MIDI channel and program?

To do tablature correctly requires at least 2 MIDI channels per staff at times. (E.g. hold one note and bend another. This is actually required for std notation that allows for bends. This is such a common thing with guitar that I'll call it GtrLick1.) What I don't like is the thought of having GtrLick1 appearing on 2 Tracks in TrackEditor. It's unintuitive from a musical perspective and would be a hassle from a user perspective.

I have some knowledge of MIDI and MIDI files, but I don't know much about the RG sequencer in particular. PowerTab outputs to MIDI in the "ugly" way: GtrLick1 would appear on 2 tracks in a MIDI file. Is the RG sequencer limited to handling things this way? Cakewalk doesn't output to MIDI that way. A track may consist of multiple MIDI channels and can have program changes. Most MIDI files you find on the net are done this way.

My point is that if GtrLick1 must appear as 2 tracks in TrackView, then the user is forced to deal with tracks as something more "raw" than what's found in a MIDI file. Is that what we want? 
If the answer is yes, then I would suggest the current TrackView be called RawTrackView or something, and have a more "normal" or higher-level TrackView (one in which GtrLick1 appears in a single rectangle) available to the user.

> > Question 3: If we have 1 voice per track, what do we call the Thingy
> > that encapsulates a group of tracks for display as an atomic unit in
> > a notation view.
>
> Staff.

So you want to rename the current class we call Staff to StaffSegment or something? I'm all for that.

My next iteration:

1 voice per Segment
1 voice per Track

Composition owns a list of Scores.

class Score
{
    list<ScorePart*> m_staffs;
    string m_label;
};

class ScorePart
{
    StaffType m_staffType;    // StdStaff, GrandStaff, StdTabStaff, etc.
    TablatureTuning m_tabTuning;
    map<Segment*, int> m_segmentVoiceMap;  // <segment, voice>
    map<int, int> m_voiceSubstaffMap;      // <voice, substaff>
    string m_label;
};

ScorePart represents one unit in a musical score (e.g. the flute part on a StdStaff, or the piano on a GrandStaff). ScorePart could have a couple of constructors to give the user different levels of control. The simplest would be

ScorePart(list<Track*> trkList, StaffType type, TablatureTuning tabTuning);

The first track is mapped to voice 0, the second to voice 1, etc. On a StdStaff, even voices get stem-up, odd stem-down. For a GrandStaff, voice 0 is top part stem-up, voice 1 top part stem-down, voice 2 bottom part stem-up, and voice 3 bottom part stem-down.

The user configures the Score in a (tabbed) dialog. NotationView has access to Score. If opened by selecting a bunch of segments in TrackEditor followed by "Open in Notation View", a new Score is created using sensible defaults. When the view is closed, the user can save the Score or just let it be deleted. Scores get saved in the .rg file.

We could do templates and all sorts of convenience stuff later. |
From: Chris C. <ca...@al...> - 2007-08-02 19:31:35
|
On Thursday 02 August 2007 00:13, M. Donalies wrote:
> To do tablature correctly requires at least 2 MIDI channels per staff
> at times. (E.g. hold one note and bend another. This is actually
> required for std notation that allows for bends. This is such a
> common thing with guitar that I'll call it GtrLick1.)

Ugh. I wasn't aware of that. I suppose it's inevitable, given that MIDI pitch bend is a channel message.

> I have some knowledge of MIDI and MIDI files, but I don't know much
> about the RG sequencer in particular. PowerTab outputs to MIDI in the
> "ugly" way: GtrLick1 would appear on 2 tracks in a MIDI file. Is the
> RG sequencer limited to handling things this way? Cakewalk doesn't
> output to MIDI that way. A track may consist of multiple MIDI
> channels and can have program changes. Most MIDI files you find on
> the net are done this way.

Rosegarden splits these into one track per channel. You can only send to a single channel from a Rosegarden track, and that's for the very simple reason that all of the properties associated with the MIDI channel appear at track level (or more accurately at instrument level, but the track has only a single instrument) in Rosegarden. Note that this is also consistent with the way plugin instruments work -- everything on a track goes to the same plugin.

I have to say that I think this is a good feature of Rosegarden; it's certainly very much core to the GUI of the program. And I don't think I would see much benefit in changing it (doing so would probably be fairly easy from a data-storage point of view but very hard from a GUI point of view) for the sake of being able to play directly a particular guitar idiom, especially given all the other existing notational idioms that we can't play directly either.

(I don't have much time right now so I'm just replying to a couple of obvious things -- I hope to get back to the rest of your email tomorrow.)

Chris |
From: Chris C. <ca...@al...> - 2007-08-02 19:33:24
|
On Wednesday 01 August 2007 23:06, D. Michael McIntyre wrote:
> That's clunky, but I don't think it will
> happen very often in the real world, because people who use the
> voices in this fashion are the least likely to be people who really
> value the MIDI performance as an end in itself.

Composers, I think, routinely aim for a reasonably high quality "draft" performance of a piece, if only to give the musicians more of an idea what they have in mind.

> If that's the
> only thing you're worried about for splittable multi-voice parts,
> then we should go do some homework and see what Sibelius/Finale/etc.
> do. I would bet you can't split voices out into separate staffs with
> those high-dollar big boys, but I could be wrong.

Part extraction was the main selling point of Sibelius 4, I think.

Chris |
From: D. M. M. <mic...@ro...> - 2007-08-02 22:42:09
|
On Thursday 02 August 2007, Chris Cannam wrote:
> Composers, I think, routinely aim for a reasonably high quality "draft"
> performance of a piece, if only to give the musicians more of an idea
> what they have in mind.

Oh come on, composers routinely use software that barely offers them any control at all over what kind of sound they get for a given part. I haven't played with any of it extensively, but the demo files out of the box for both Sibelius and Finale sound like complete crap.

Think back to William, who never wanted any of the control we forced him to suffer through, and just wanted us to play any lame trumpet when he wrote a part for a trumpet. That's what mScore does too. They have a hard-coded built-in FluidSynth, and offer users no control at all. (If they have any users, but that's beside the point.)

> > do. I would bet you can't split voices out into separate staffs with
> > those high-dollar big boys, but I could be wrong.
>
> Part extraction was the main selling point of Sibelius 4, I think.

If I'm not much mistaken, that refers to being able to separate the master orchestral score out into individual parts for the various players involved. I could be mistaken. I wouldn't know, since that feature is busted in the demo version.
--
D. Michael McIntyre |
From: D. M. M. <mic...@ro...> - 2007-08-02 22:27:19
|
On Thursday 02 August 2007, Chris Cannam wrote:
> On Thursday 02 August 2007 00:13, M. Donalies wrote:
> > To do tablature correctly requires at least 2 MIDI channels per staff
>
> Ugh. I wasn't aware of that. I suppose it's inevitable, given that
> MIDI pitch bend is a channel message.
>
> guitar idiom, especially given all the other existing notational idioms
> that we can't play directly either.

I definitely agree with you that being able to reproduce pitch bends in MIDI is not something over which we should consider turning the world upside down. There are really quite a few things in Rosegarden that only exist as symbols when I think about it. Bow marks, fermatas, phrasing slurs, all of the LilyPond directives, legato, staccato, and probably even more. It's annoying, but I feel like they more or less represent the upper boundary of what we can realistically accomplish.

Pitch bends could definitely and easily fall into this category, and I don't think we'll be interpreting strum directives, or playing the right sort of sound for this note played on the low E string vs. this same note played on the A string, for that matter. That last one is an interesting theoretical possibility. We know what fret on what string, and coupled with a good set of samples, we *could* do that, but let's not go there.

IMHO there's limited traction to be gained trying to make MIDI sound like a real guitar anyway. In my early days, I spent a lot of time and effort doing just that. I used to painstakingly diddle the start times and durations of every chord by hand to make the part sound more realistic, but that was before I had the capability just to go record the damn guitar.

Be all of that as it may, if there were a special guitar track type that could play on two channels, it might be possible to represent this in the GUI without it becoming too mind-bending. 
Limit both channels to occurring on the same physical playback device, the same program, and offer to send to channel A and channel B. It *could* be done, though it breaks the idea of one instrument, one channel, and potentially opens the door to add even more confusion to something that's already confusing as hell for new people to grasp. Look at those nightmare diagrams in my book trying to explain how all this crap fits together. Look at how many times I had to rewrite all of that because I had previously COMPLETELY missed the point, and I had just written a heap of total nonsense as though it were the gospel truth. Remember all our old arguments about what from what, and not putting "instrument" in quotes, etc. It's a BIG can of worms to re-open, and I'd be really reluctant to do so, though I could conceive of doing so if the guitar notation writing Rosegarden users come to some huge consensus that being able to reproduce pitch bends is critically important. (Me, as a guitarist, I don't really do much with bends anyway, so I have no vested interest either way.) -- D. Michael McIntyre |
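The special two-channel guitar track described above could be sketched roughly as follows. This is purely illustrative -- all the type and member names here are invented, not existing Rosegarden classes: one channel carries straight notes, the other carries the bent notes together with their pitch-bend messages, and both are constrained to the same device and program.

```cpp
#include <cassert>
#include <vector>

// Hypothetical sketch (invented names, not Rosegarden API): a guitar
// track that owns two MIDI channels on the same device and program,
// and routes each note to channel A or channel B depending on whether
// a pitch bend applies to it.
struct NoteEvent {
    int pitch;
    bool hasBend;   // does a bend symbol apply to this note?
};

struct RoutedNote {
    int pitch;
    int channel;
};

struct TwoChannelGuitarTrack {
    int channelA;   // straight notes
    int channelB;   // bent notes; pitch-bend messages go here only

    std::vector<RoutedNote> route(const std::vector<NoteEvent>& notes) const {
        std::vector<RoutedNote> out;
        for (const NoteEvent& n : notes)
            out.push_back({n.pitch, n.hasBend ? channelB : channelA});
        return out;
    }
};
```

The point of the sketch is that the routing decision lives in one place, so the rest of the model could keep treating the pair as a single "instrument" even though it occupies two channels.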
From: Chris C. <ca...@al...> - 2007-08-03 07:38:15
|
On Thursday 02 August 2007 23:27, D. Michael McIntyre wrote: > On Thursday 02 August 2007, Chris Cannam wrote: > > On Thursday 02 August 2007 00:13, M. Donalies wrote: > > > To do tablature correctly requires at least 2 MIDI channels per > > > staff > > > > Ugh. I wasn't aware of that. I suppose it's inevitable, given > > that MIDI pitch bend is a channel message. > > > > guitar idiom, especially given all the other existing notational > > idioms that we can't play directly either. > > I definitely agree with you that being able to reproduce pitch bends > in MIDI is not something over which we should consider turning the > world upside down. Another point about this one is that although it cropped up during a discussion of the pros and cons of a handful of different mappings between staff, voice, segment and track, _none_ of the proposed suggestions would actually allow you to have different "playback instruments" on the same voice, which is what is called for in this situation. This problem arises from something in Rosegarden other than its (lack of) voice handling: namely, the mapping between tracks and instruments, which simply can't support this particular performance hack. Chris |
From: Pedro Lopez-C. <ped...@gm...> - 2007-08-03 08:01:14
|
On 8/3/07, Chris Cannam <ca...@al...> wrote: > On Thursday 02 August 2007 23:27, D. Michael McIntyre wrote: > > On Thursday 02 August 2007, Chris Cannam wrote: > > > On Thursday 02 August 2007 00:13, M. Donalies wrote: > > > > To do tablature correctly requires at least 2 MIDI channels per > > > > staff > > > > > > Ugh. I wasn't aware of that. I suppose it's inevitable, given > > > that MIDI pitch bend is a channel message. > > > > > > guitar idiom, especially given all the other existing notational > > > idioms that we can't play directly either. > > > > I definitely agree with you that being able to reproduce pitch bends > > in MIDI is not something over which we should consider turning the > > world upside down. But Rosegarden being a MIDI sequencer, it should take MIDI guitar conventions into account. These guitars usually send events on 6 different channels at once, one for each string. The incoming MIDI channel is recorded as a property of each event, but it is not used for playback. If you record a MIDI guitar on only one track, the playback channel assigned to the instrument for this track is used to play the track, which means that the recorded pitch bend events are wrongly applied to all the strings. The workaround is to record the MIDI guitar on 6 different tracks using recording filters. > Another point about this one is that although it cropped up during a > discussion of the pros and cons of a handful of different mappings > between staff, voice, segment and track, _none_ of the proposed > suggestions would actually allow you to have different "playback > instruments" on the same voice, which is what is called for in this > situation. > > This problem arises because of something different about Rosegarden than > its (lack of) voice handling, namely the mapping between tracks and > instruments which simply can't support this particular performance > hack. 
Didn't we have a discussion some time ago about moving Instrument from Track to the Segment level, and moving the playback channel from Instrument to Track? Regards, Pedro |
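The recording-filter workaround Pedro describes above amounts to bucketing incoming events by the channel they arrived on, one bucket (track) per string. A minimal illustration, with invented types rather than Rosegarden's actual event classes:

```cpp
#include <cassert>
#include <map>
#include <vector>

// Illustrative sketch (invented names): a MIDI guitar sends each
// string on its own channel. Recording onto six tracks via channel
// filters is equivalent to splitting the incoming stream by the
// channel recorded on each event.
struct RecordedEvent {
    int channel;   // channel the event arrived on (one per string)
    int pitch;
};

std::map<int, std::vector<RecordedEvent>>
splitByChannel(const std::vector<RecordedEvent>& input) {
    std::map<int, std::vector<RecordedEvent>> tracks;
    for (const RecordedEvent& e : input)
        tracks[e.channel].push_back(e);   // one track per source channel
    return tracks;
}
```

With each string on its own track (and therefore its own playback channel), recorded pitch bends stay confined to the string they belong to.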
From: Chris C. <ca...@al...> - 2007-08-03 08:21:11
|
On Friday 03 August 2007 09:01, Pedro Lopez-Cabanillas wrote: > But being Rosegarden a MIDI Sequencer, it should take into account > MIDI guitar conventions. These guitars usually send events in 6 > different channels at once, one for each string. Hm. That's not very nice. MIDI and guitars really shouldn't be mixed... > Didn't we had a discussion some time ago about moving Instrument from > Track to the Segment level, and moving the playback channel from > Instrument to Track? Yes, now you come to mention it again -- you had a proposal which I found quite compelling, but I can't remember all the details now. Any likelihood you can find it and summarise/paste it onto the wiki? I expect I can look it up if you can't. Perhaps it could be the missing piece of the jigsaw. Did you mean "Instrument to Track" in the last sentence, btw? If so, I don't see how that would solve the guitar problem. Chris |
From: Pedro Lopez-C. <ped...@gm...> - 2007-08-03 16:19:41
|
On Friday, 3 August 2007 10:28, Chris Cannam wrote: > > Didn't we had a discussion some time ago about moving Instrument from > > Track to the Segment level, and moving the playback channel from > > Instrument to Track? > > Yes, now you come to mention it again -- you had a proposal which I > found quite compelling, but I can't remember all the details now. Any > likelihood you can find it and summarise/paste it onto the wiki? I > expect I can look it up if you can't. Perhaps it could be the missing > piece of the jigsaw. I will try to write a summary for the wiki ASAP. Meanwhile, here is the thread http://thread.gmane.org/gmane.comp.audio.rosegarden.devel/2313/focus=2387 > Did you mean "Instrument to Track" in the last sentence, btw? If so, I > don't see how that would solve the guitar problem. Yes, my proposal was to independise the outgoing MIDI channel from the Instrument and move it away, perhaps to the track level. But you are right, these are two unrelated problems, and you still need to use different tracks for differents guitar strings. To clarify the MIDI channel / Track / Instrument problem in Rosegarden: We store a MIDI channel property for each recorded MIDI channel event. But this property is not used for playback. Other software sequencers, like Sonar/Cakewalk, allow the user to assign to each track an optional forced MIDI channel for playback. If the forced MIDI channel is not set, then the original recorded MIDI channel is also used for playback. We don't have this option in Rosegarden: there is always a forced MIDI output channel for each Instrument. It should be not very hard to implement the other policy in Rosegarden, though. We would need a "default MIDI channel" for events created programmatically. Regards, Pedro |
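The channel-resolution policy Pedro describes (a Sonar/Cakewalk-style optional forced channel per track, falling back to the event's recorded channel, with a default for programmatically created events) could be expressed in a few lines. The function name and signature here are invented for illustration, not an existing Rosegarden interface:

```cpp
#include <cassert>
#include <optional>

// Sketch of the proposed playback-channel policy (not actual
// Rosegarden code): a track may carry an optional forced channel; if
// it is unset, the event's own recorded channel is used, falling back
// to a default for events created programmatically, which have no
// recorded channel.
int resolvePlaybackChannel(std::optional<int> trackForcedChannel,
                           std::optional<int> eventRecordedChannel,
                           int defaultChannel) {
    if (trackForcedChannel) return *trackForcedChannel;
    if (eventRecordedChannel) return *eventRecordedChannel;
    return defaultChannel;
}
```

Rosegarden's current behaviour corresponds to the forced channel always being set (per Instrument); the proposal is simply to make that first field optional.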
From: M. D. <ztr...@ad...> - 2007-08-03 15:10:18
|
On Thursday 02 August 2007 22:27, D. Michael McIntyre wrote: > On Thursday 02 August 2007, Chris Cannam wrote: > > On Thursday 02 August 2007 00:13, M. Donalies wrote: > > > To do tablature correctly requires at least 2 MIDI channels per staff > > > > Ugh. I wasn't aware of that. I suppose it's inevitable, given that > > MIDI pitch bend is a channel message. > > > > guitar idiom, especially given all the other existing notational idioms > > that we can't play directly either. > > I definitely agree with you that being able to reproduce pitch bends in > MIDI is not something over which we should consider turning the world > upside down. There are really quite a few things in Rosegarden that only > exist as symbols when I think about it. Bow marks, fermatas, phrasing > slurs, all of the LilyPond directives, legato, staccato, and probably even > more. It's annoying, but I feel like they more or less represent the upper > boundary of what we can realistically accomplish. This really sucks. I didn't realize that the sequencer was so primitive. Even Powertab can do those things, and it's pretty darn limited in its capabilities. If it's just a matter of putting the bend on a separate track (1 voice per track), then we only need to display 2 tracks on the same staff, which we want to do anyway. If bends can't be reproduced at all, I don't see much point in having tablature in RG. Might as well just do it in Lilypond. > Pitch bends could definitely and easily fall into this category, and I > don't think we'll be interpreting strum directives, or playing the right > sort of sound for this note played on the low E string vs. this same note > played on the A string, for that matter. That last one is an interesting These two are far less important than bends, and the last one doesn't really matter at all for my purposes. Not handling legato sounds ugly, but you can still figure out what's going on. 
Not handling rolling of chords and such is of somewhat low importance as well. But if bends don't sound, you don't even have the right pitch, which is usually a pretty important thing. Let's take a popular but trivially easy song to play: Clapton's Wonderful Tonight. MIDI playback would be completely useless without bends. Now something like that is easy to figure out and you probably don't need MIDI playback to learn the song. But it's a different matter if you're trying to figure out how to play something like Eric Johnson's Cliffs of Dover. MIDI playback is a big help here. With my own music, I definitely need the playback. I write pieces that I can't immediately play. I often change parts before the whole piece is completed. It's much easier to input it as MIDI, finish composing, and then go back and learn how to play it than it would be to record a myriad of slightly different guitar takes, throwing most of them away. |
From: Chris C. <ca...@al...> - 2007-08-03 16:42:57
|
On Friday 03 August 2007 16:06, M. Donalies wrote: > This really sucks. I didn't realize that the sequencer was so > primitive. I'm not sure what exactly you are referring to. The sequencer certainly handles pitch bends. We even have example files that use them in guitar parts, such as stormy-riders.rg. They're hard to edit once recorded, but that's a different problem. What Michael and I seem to be more or less agreeing on is that it isn't necessarily a good trade-off to change the channel management in Rosegarden (i.e. the association of a single channel with each track -- in essence, the definition of a MIDI track in Rosegarden is "a set of events that all play to the same channel") just to provide automatic playback of a particular idiom that guitarists use to work around the limitations of MIDI pitch bends -- namely sending notes to more than one channel at once because pitch bends in MIDI cannot be applied to individual notes but only to whole channels. That limitation is built into MIDI; it is not a limitation of Rosegarden's sequencer in particular. That said, there could be workarounds that wouldn't involve changing the whole model (as Pedro suggests). Chris |
From: Pedro Lopez-C. <ped...@gm...> - 2007-08-03 17:54:08
|
On Friday, 3 August 2007 10:28, Chris Cannam wrote: > Any likelihood you can find it and summarise/paste it onto the wiki? I > expect I can look it up if you can't. It was already there: http://rosegarden.wiki.sourceforge.net/Instruments+and+Devices Problem description: We would like a way to insert MIDI program changes in the middle of tracks. This is a common feature in many sequencer programs, but it is not currently supported by RG. Also, the "import MIDI files" function can't currently be fixed to properly process MIDI files that have PC events in the middle of tracks. Proposal summary: Put program changes at the beginning of segments. Implement this in Rosegarden by associating the Instrument (which holds the bank/program numbers) with the Segment class instead of the Track. Change the MIDI file import function to create a new segment whenever it finds a bank/program change event in the middle of a track. Regards, Pedro |
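The import rule in the proposal above -- start a new segment at every mid-track program change, so each segment carries a single program at its start -- can be sketched as follows. All the types here are invented stand-ins, not Rosegarden's MIDI import classes:

```cpp
#include <cassert>
#include <vector>

// Sketch of the proposed import policy (invented types): walk a MIDI
// track and begin a new segment at every program-change event, so that
// each segment has exactly one program in force from its start.
struct MidiEvent {
    bool isProgramChange;
    int data;          // program number for PCs, pitch for notes
};

struct SegmentSketch {
    int program;                    // program in force for this segment
    std::vector<int> notePitches;
};

std::vector<SegmentSketch>
splitAtProgramChanges(const std::vector<MidiEvent>& track, int initialProgram) {
    // Start with one segment using the program in force at the top of
    // the track (e.g. from a PC at time zero).
    std::vector<SegmentSketch> segments{{initialProgram, {}}};
    for (const MidiEvent& e : track) {
        if (e.isProgramChange)
            segments.push_back({e.data, {}});   // new segment from here on
        else
            segments.back().notePitches.push_back(e.data);
    }
    return segments;
}
```

A real implementation would also carry bank selects, timestamps, and the rest of each event, but the segment-boundary logic would be this simple.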
From: D. M. M. <mic...@ro...> - 2007-08-03 22:00:30
|
On Friday 03 August 2007, Pedro Lopez-Cabanillas wrote: > We would like a way to insert MIDI program changes in the middle of tracks. > This is a common feature in many sequencer programs, but it is not > currently supported by RG. Although that isn't strictly true. We do have a way to insert program changes in the middle of segments (and by extension, tracks) but it is horrible to use, and broken, because it is not possible to send a bank along with the program. I intended to roll up my sleeves and fix this myself, but I gave up quickly when I hit a brick wall. I can't remember the details of the brick wall, but it had roots somewhere deep in base/ and had the potential to break things all over the place if I altered the fundamental nature of the poorly conceived base class, which was designed without any understanding that a particular patch occurs at a bank/program address, not just a program address. There was some other problem with trying to pretty up the interface with verbose program names too. No track parameters back then, or maybe it was this whole track/segment/instrument relationship we're talking about altering now. -- D. Michael McIntyre |
From: Chris C. <ca...@al...> - 2007-08-04 13:33:09
|
On Friday 03 August 2007 18:52, Pedro Lopez-Cabanillas wrote: > It was already there: > http://rosegarden.wiki.sourceforge.net/Instruments+and+Devices Oops. I made that page. > Put program changes at the beginning of the segments. Implement this > in Rosegarden associating the Instrument (which holds the > bank/program numbers) to the Segment class instead of the Track. A problem with this of course is the need for (and perhaps consequences of) sending bank/program changes each time playback enters a segment whose program differs from that of the segment previously playing on that track. Chris |
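Chris's concern above -- sending a bank/program change whenever playback enters a segment whose program differs from the one last playing on that track -- boils down to tracking one piece of per-track state. A minimal sketch with invented names:

```cpp
#include <cassert>

// Sketch (invented names, not Rosegarden code): per-track playback
// state remembering the last program sent, so a program change is
// emitted only when entering a segment with a different program.
struct PlaybackState {
    int lastProgramSent = -1;   // -1: nothing sent yet on this track
};

// Returns true if a program change had to be emitted for this segment.
bool enterSegment(PlaybackState& state, int segmentProgram) {
    if (segmentProgram != state.lastProgramSent) {
        state.lastProgramSent = segmentProgram;   // a real sequencer
        return true;                              // would send the PC here
    }
    return false;
}
```

The "consequences" Chris mentions are the cases this sketch ignores: a program change arriving mid-note on some synths, or the cost of re-sending bank selects on devices that reset state.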
From: D. M. M. <mic...@ro...> - 2007-08-03 22:19:33
|
On Friday 03 August 2007, M. Donalies wrote: > > I definitely agree with you that being able to reproduce pitch bends in > > MIDI is not something over which we should consider turning the world > This really sucks. I didn't realize that the sequencer was so primitive. That didn't come out very well. As Chris said, of course we can do pitch bends. The problem is we can't use two different channels from within the same track, so there isn't any way to make it automagically play the straight notes on channel X and the bent notes on channel Y. (How does real guitar software handle it if there is more than one bend simultaneously, by more than one interval, say this note is bent a half step, and midway through, this other note comes in bent a whole step, or something. I'm making up a scenario. I'm too stupid to play tab that has bends written in it anyway. I have to figure it out by ear, or mostly I just give up. I'm a real underachiever as a guitarist, as you've doubtless noticed. :) ) Pedro raises an interesting point about channel info being stored with an event, and the track level channel providing a default path if no other path is specified. Real sequencers do do this, and it's one of the things that makes Rosegarden not quite a real sequencer. In Cakewalk (ca. 1991 edition) it was definitely possible to use the controller drawing mechanism to pick which channel the controllers would affect, and it worked independently of whatever channel was assigned to that track. Implementing something like that could have real merit in making us more legitimate as a true sequencer, plus it might also provide a handy mechanism for what Michelle needs to make her tab behave the way she wants. I'm not disparaging the idea of making pitch bend symbols do something. Not at all. It's just that if we had pitch bend symbols that didn't actually do anything audible, they would be in good company with our many other broken or half assed features. 
(One particularly broken feature that's a great parallel example is the unfortunate matter of our grace notes. I can draw them, but I'm on my own figuring out how they're supposed to sound. Rosegarden gets it abysmally wrong 100% of the time.) -- D. Michael McIntyre |
From: Chris C. <ca...@al...> - 2007-08-04 13:19:58
|
OK, I think this thread has been derailed by all the discussion about channel and track mappings, which perhaps has turned out to be mostly a distraction. Coming back to an earlier email and a part I didn't reply to: On Thursday 02 August 2007 00:13, M. Donalies wrote: > So you want to rename the current class we call Staff to StaffSegment > or something? Sounds OK to me. > My next iteration: > 1 voice per Segment > 1 voice per Track > Composition owns a list of Scores. As well as the existing list of tracks (i.e. in my mind at least, the main sequencer view would not necessarily change much -- the score management views/dialogs would be elsewhere). > class ScorePart > { > StaffType m_staffType; // StdStaff, GrandStaff, StdTabStaff, etc. > TablatureTuning m_tabTuning; > map<Segment*, int> m_segmentVoiceMap; // <segment, voice> > map<int, int> m_voiceSubstaffMap; // <voice, substaff> > string m_label; > }; I'm not sure that it's necessary to identify voices directly. Rather than saying "this segment is voice 0" and defaulting to "voice 0 is stem-up on the top staff", why not say "this segment is stem-up on the top staff"? You can always derive voice numbers from this when you render it all, assuming you need them for user interaction purposes, and they'll be consistent between renderings. For identifying voices to the user, the segment names are now perfectly acceptable as each segment contains material from only a single voice. We can also use the instrument database stuff (associated with track in current RG) to define default staff types for segments. I imagine most of these details would be open to experimentation -- there'll probably turn out to be hideous problems with most of the possibilities. > The user configures the Score in a (tabbed) dialog. NotationView has > access to Score. If opened by selecting a bunch of segments in > TrackEditor followed by "Open in Notation View", a new score is > created using sensible defaults. 
When the view is closed, user can > save the Score or just let it be deleted. > > Scores get saved in the .rg file. We could do templates and all sorts > of convenience stuff later. Yes, yes and yes. Also, when printing to Lilypond or wherever from the main view, you could choose one of the set of known scores (or just "everything arranged per default") to print out. Chris |
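Chris's suggestion above -- store per-segment placement (substaff plus stem direction) instead of explicit voice numbers, and derive consistent voice numbers at render time -- could look something like this. The types and the particular ordering rule (lower substaff first, stem-up before stem-down) are invented for illustration:

```cpp
#include <algorithm>
#include <cassert>
#include <map>
#include <string>
#include <utility>
#include <vector>

// Sketch (invented types): each segment records where it sits, and
// voice numbers are derived on demand. Because the numbering depends
// only on the placements, it is consistent between renderings.
struct Placement {
    int substaff;   // 0 = top staff of the part
    bool stemUp;
};

std::map<std::string, int>
deriveVoices(const std::map<std::string, Placement>& placements) {
    std::vector<std::pair<std::string, Placement>> sorted(placements.begin(),
                                                          placements.end());
    // Order: lower substaff index first; within a substaff, stem-up
    // before stem-down. This ordering is an assumption for the sketch.
    std::sort(sorted.begin(), sorted.end(),
              [](const auto& a, const auto& b) {
                  if (a.second.substaff != b.second.substaff)
                      return a.second.substaff < b.second.substaff;
                  return a.second.stemUp && !b.second.stemUp;
              });
    std::map<std::string, int> voices;
    int v = 0;
    for (const auto& p : sorted) voices[p.first] = v++;
    return voices;
}
```

This keeps the user-facing model ("this segment is stem-up on the top staff") separate from the derived bookkeeping the renderer needs.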
From: M. D. <ztr...@ad...> - 2007-08-04 15:11:24
|
On Saturday 04 August 2007 13:27, Chris Cannam wrote: > > My next iteration: > > 1 voice per Segment > > 1 voice per Track > > Composition owns a list of Scores. > > As well as the existing list of tracks (i.e. in my mind at least, the > main sequencer view would not necessarily change much -- the score > management views/dialogs would be elsewhere). You mean "Composition owns a list of Tracks"? I don't see any reason to change that. I was thinking of ScoreView as a window opening from a menu or toolbar button or something, pretty much leaving all current track view stuff alone. > > class ScorePart > > { > > StaffType m_staffType; // StdStaff, GrandStaff, StdTabStaff, etc. > > TablatureTuning m_tabTuning; > > map<Segment*, int> m_segmentVoiceMap; // <segment, voice> > > map<int, int> m_voiceSubstaffMap; // <voice, substaff> > > string m_label; > > }; > > I'm not sure that it's necessary to identify voices directly. Rather > than saying "this segment is voice 0" and defaulting to "voice 0 is > stem-up on the top staff", why not say "this segment is stem-up on the > top staff"? You can always derive voice numbers from this when you > render it all, assuming you need them for user interaction purposes, > and they'll be consistent between renderings. For identifying voices > to the user, the segment names are now perfectly acceptable as each > segment contains material from only a single voice. I'll have to think about this. I have to sit down and work it out on paper. > > Scores get saved in the .rg file. We could do templates and all sorts > > of convenience stuff later. > > Yes, yes and yes. Also, when printing to Lilypond or wherever from the > main view, you could choose one of the set of known scores (or > just "everything arranged per default") to print out. Yes, once the mechanism is set up, printing or exporting to Lilypond should follow without too much trouble. I think I can come up with at least the basics of score layout. 
I think I'm going to start with the underlying code rather than a user interface (i.e. ScoreView waits till later). My use case: User selects a bunch of Segments in TrackEditor and right-clicks on "Open in Notation View". NotationView will create a default Score from the selected Segments. NotationView also creates a list of Staffs. The Staffs create StaffSegments. Each Staff is passed a font name and size to create its own NotePixmapFactory. Each Staff also needs its own StaffHLayout and StaffVLayout (to scan the staff, position chords, etc.). NotationView creates a ViewHLayout and ViewVLayout (possibly combined into a single class?) which take care of reconciling the staffs. I have prototype code lying around somewhere that does something like this and allows for multiple staff types in the same view. A Staff consists of a single set of lines. The Score should hold the logic of combining staffs into a grand staff or std tablature. ViewHLayout and ViewVLayout should have access to Score so that connecting lines and such can be added. Sound reasonable? |
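A highly abbreviated skeleton of that use case, in the same spirit as the class sketches earlier in the thread. None of these are existing Rosegarden classes, the ownership scheme is one possible choice, and the font name and size passed to the staffs are placeholders:

```cpp
#include <cassert>
#include <memory>
#include <string>
#include <vector>

// Skeleton only (invented names): NotationView builds a default Score
// from the selected segments, then one Staff per segment; each Staff
// owns its own NotePixmapFactory and layout objects.
struct Segment {};
struct NotePixmapFactory { std::string font; int size; };
struct StaffHLayout {};   // scans the staff, positions chords, etc.
struct StaffVLayout {};

struct Staff {
    Staff(const std::string& font, int size) : m_npf{font, size} {}
    NotePixmapFactory m_npf;
    StaffHLayout m_hlayout;
    StaffVLayout m_vlayout;
};

struct Score {
    std::vector<Segment*> m_segments;   // not owned
};

struct NotationView {
    explicit NotationView(const std::vector<Segment*>& selected) {
        m_score.m_segments = selected;                 // default Score
        for (size_t i = 0; i < selected.size(); ++i)   // one Staff each
            m_staffs.push_back(std::make_unique<Staff>("feta", 8));
    }
    Score m_score;
    std::vector<std::unique_ptr<Staff>> m_staffs;
    // ViewHLayout / ViewVLayout, which reconcile the staffs, omitted.
};
```

The view-level layout classes would then consult m_score to decide which staffs get joined by braces or connecting lines.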