From: James T. <ja...@fl...> - 2020-08-06 08:50:34
> On 5 Aug 2020, at 22:55, Julian Smith <ju...@op...> wrote:
>
> But ultimately it's your choice to do so, and if you did less of it the
> world would still carry on turning. So on the contrary I'd say you
> could certainly reduce the amount of time you spend on bug-fixes,
> releases, merge requests, or code reviews.

Oh, absolutely, it’s self-inflicted :) And as Thorsten R mentioned in his reply, there are plenty of other people who do boring work as well, or have done in the past.

There is, however, a timeliness factor: if merge requests or ‘how do I do X’ questions go unanswered, we lose the chance of new contributions, which would be a real shame. So I do feel an obligation (again, self-imposed) to respond and progress things, especially patches, merge requests and ‘actionable’ bug reports. I would be very happy to share that load, but I’m not okay with just letting it grind to a halt.

> Another way of putting this (and I think someone else said something
> similar recently) is that we are perhaps trying to behave like a large
> company with low-frequency, high-quality long-term-support releases, with
> associated release branches and back-porting of fixes, plus a separate
> 'cutting edge' branch for ongoing development. But it's really
> difficult because we only have a tiny fraction of the person-power that
> this approach requires.

Well, here there’s good news, bad news, and ugly news. (I just made the last one up, but I couldn’t resist… also RIP Ennio Morricone, you made some awesome music…)

The good news: we’re already doing this. We try to keep next always running and flyable, and if something breaks that, we fix it (or in rare cases, revert). In principle, next is always in a releasable state, as far as is practical. My email a few months ago was specifically to state that, unusually, we would be briefly deviating from that while getting some major restructuring done. So we could just rename the nightly builds to ‘FlightGear Insiders’ (can you tell I run VS Code…) and reword the splash screen message to say something a bit more positive about their quality. (We used to do four ‘mostly automated’ releases a year, where we would do some manual checking of more stuff, to ensure more users were running something close to next.)

The bad news: FlightGear is not an application, it’s an SDK. The reason the above doesn’t work, especially the quarterly releases, is not about code quality per se, but rather about aircraft compatibility. We don’t do an LTS for the sake of fixing a random bug in the UI; we do it so end-users can have an expectation of whether particular aircraft they downloaded from $random-website will work or not. And we have no control over aircraft developers, and often not much contact with them. It’s Long-Term STABLE, not Long-Term SUPPORT, even if we do some support to be nice.

So part of the idea of the LTS (from my side) is to have a setup where flight-simmers understand that just because their plane worked in X-Plane 10 doesn’t mean it will work in X-Plane 11: and therefore just because something works in FG 2018.3 does /not/ mean it will just work in FG 2020.2, at least without checking.

(And we should make the checking process accessible and community-driven - hence the ‘aircraft testing’ stuff we’re trying to encourage right now.)

Hence the long release cycle: it’s not actually about fixing core bugs (although that will be nice, to focus people’s attention on bugs rather than new features…), it’s about checking aircraft and giving a stable, compatible platform.

The ugly news: the evidence suggests, again as Thorsten R mentioned, that we struggle to get usable bug reports or feedback in /any/ of these development models. (That’s why I did some work adding Sentry.io crash reporting, to automate part of the process once it’s done.) Because of the patchy testing, we do regress things on next, because we can’t test even 30% of the esoteric, unmaintained features of the codebase. And when users do make reports, it’s often hard to action them: if it crashed on their favourite aircraft while doing an approach at the end of an eight-hour flight, that’s super annoying for them, but it doesn’t give us any way to fix it (again, this is why I did some work on automation for the crashes at least).
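For context, that crash reporting uses the sentry-native SDK. The shape of the integration is roughly like this - a minimal sketch using the public sentry-native C API, not the actual fgfs code; the DSN and release strings are placeholders:

    #include <sentry.h>

    int main()
    {
        // Configure and start the crash handler as early as possible,
        // before the rest of the sim initialises.
        sentry_options_t* options = sentry_options_new();
        sentry_options_set_dsn(options,
            "https://<key>@<org>.ingest.sentry.io/<project>");      // placeholder DSN
        sentry_options_set_release(options, "flightgear@2020.2.0"); // placeholder tag
        sentry_init(options);

        // ... run the simulator: native crashes are now captured as
        // minidumps and can be uploaded automatically ...

        // Non-fatal errors can also be reported explicitly, so they
        // get aggregated server-side:
        sentry_capture_event(sentry_value_new_message_event(
            SENTRY_LEVEL_ERROR, "example-logger", "example error message"));

        sentry_close(); // flush pending events before exit
        return 0;
    }

The point being that the report arrives with a stack trace and OS/GPU context attached, instead of relying on the user to write all that down after an eight-hour flight.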
The problems with the threaded GC are a good example of this: it never crashed for me (or Richard), so it took absolutely ages for anyone to figure out what was going on. So next was ‘a bit less stable’, but not in a way anyone could have identified quickly, that I can see.

One improving area is unit testing: because of it, certain areas and features (e.g. carrier starts) now ‘can’t break’. As we add more such areas (e.g. multiplayer, AI, protocols and replay are all possible), we increase the baseline quality, and also have a clearer idea of when we make incompatible changes. (The idea being that we capture the ‘supported API’ in the tests: when an aircraft deviates from that, we can decide to add another test case, fix the aircraft, etc.) There’s a rough sketch of what such a test case looks like in the P.S. below. Of course, there are some pretty major areas where Automated Testing Is Hard (TM).

None of this is to say things can’t be changed, but to try to explain why we are where we are :)

Kind regards,
James
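P.S. For anyone curious about the unit tests: the fgfs test suite is built on CppUnit. Here is a self-contained toy showing the shape a test case takes - illustrative only, not actual test-suite code; the real fixtures boot a minimal FlightGear environment rather than testing a free function:

    #include <cppunit/extensions/HelperMacros.h>
    #include <cppunit/extensions/TestFactoryRegistry.h>
    #include <cppunit/ui/text/TestRunner.h>

    // Toy stand-in for a piece of sim logic under test.
    static double headingDiffDeg(double from, double to)
    {
        double d = to - from;
        while (d > 180.0)   d -= 360.0;
        while (d <= -180.0) d += 360.0;
        return d;
    }

    class HeadingTests : public CppUnit::TestFixture
    {
        CPPUNIT_TEST_SUITE(HeadingTests);
        CPPUNIT_TEST(testWrapAround);
        CPPUNIT_TEST_SUITE_END();

    public:
        void testWrapAround()
        {
            // Crossing north must wrap, rather than returning +/-340.
            CPPUNIT_ASSERT_DOUBLES_EQUAL(20.0,  headingDiffDeg(350.0, 10.0), 1e-9);
            CPPUNIT_ASSERT_DOUBLES_EQUAL(-20.0, headingDiffDeg(10.0, 350.0), 1e-9);
        }
    };

    CPPUNIT_TEST_SUITE_REGISTRATION(HeadingTests);

    int main()
    {
        CppUnit::TextUi::TestRunner runner;
        runner.addTest(CppUnit::TestFactoryRegistry::getRegistry().makeTest());
        return runner.run() ? 0 : 1;
    }

Once an area is covered like this, a regression shows up as a red test at commit time rather than as a vague report weeks later.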