From: James T. <zak...@ma...> - 2016-11-13 10:19:41
> On 13 Nov 2016, at 03:57, Geoff McLane <ub...@ge...> wrote:
>
> How quickly 'fgms' providers then update is another
> thing... out of our control...

Well, I would like to fix that so we can actually ensure the majority of the network is /not/ running ancient versions. But I am not sure whether you (Geoff) or anyone else (Curt) has an email group of ‘the server operators’.

In terms of FGMS changes, you are correct that adding new packet types should not affect the server, and I think with some intelligent changes we can reduce the bandwidth rather than increase it. But there are some particular clients (especially ATC) that would benefit from FGMS handling them specially, and this brings us into the realm of authentication for some FGMS users.

I asked a question some days ago about whether it would be better to decrease the packet size to address server operators’ bandwidth concerns. At the physical layer an Ethernet or ATM frame is a frame, with a size, so sending a big packet doesn’t take much longer than a small one. But if FGMS operators are billed or capped by /bytes/, reducing the packet size would help them.

Adding a ‘ping’ message so FGFS can auto-select the closest server would be a big step, but should be easy on the FGMS side. (The only complexity is a rate limit to avoid a bad client flooding the server with pings.)

So I think any FGMS changes will likely be quite small, but it would be good to ensure the ‘main network’ stays relatively close to current versions. In a perfect world we would have an automated system to update the servers; I would welcome any thoughts on how to achieve that. (It doesn’t need to be smart; it could be as simple as: git pull; make; make check -> if that passes, make install; restart fgms.)

Kind regards,
James
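As a stopgap before any fgms protocol change lands, the closest-server selection could be approximated from the client side with plain ICMP ping. A minimal sketch; the helper names are made up, and the hostnames in the usage comment are only for illustration:

```sh
# pick_closest: read "host rtt_ms" lines on stdin and print the host
# with the lowest round-trip time.
pick_closest() {
  sort -n -k2 | head -n1 | cut -d' ' -f1
}

# measure_rtts: ping each candidate server once and emit "host rtt_ms".
# Hosts that do not answer within 2 seconds are silently skipped.
measure_rtts() {
  for host in "$@"; do
    rtt=$(ping -c1 -W2 "$host" | sed -n 's/.*time=\([0-9.]*\).*/\1/p')
    [ -n "$rtt" ] && echo "$host $rtt"
  done
}

# Example usage (hypothetical server names):
#   measure_rtts mpserver01.example.org mpserver02.example.org | pick_closest
```

This only approximates what a real fgms ping message would measure (ICMP latency rather than application-level response time), but it needs no server-side changes at all.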
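The update pipeline mentioned above (git pull; make; make check; make install; restart fgms) can be sketched as a small shell function. This assumes a standard fgms source checkout with a `make check` target and a service manager that knows an `fgms` unit; both are assumptions about the operator's setup. Setting `RUN=echo` turns it into a dry run that only prints the commands:

```sh
# update_fgms: pull, build, test, and redeploy one fgms checkout.
# Runs in a subshell so 'cd' and 'set -e' do not leak into the caller;
# 'set -e' aborts the whole sequence as soon as any step fails, so
# 'make install' never runs if 'make check' does not pass.
update_fgms() (
  set -e
  cd "$1"
  ${RUN:-} git pull --ff-only      # fetch the latest sources
  ${RUN:-} make                    # build
  ${RUN:-} make check              # deploy only if the tests pass
  ${RUN:-} make install
  ${RUN:-} systemctl restart fgms  # restart the running server
)

# Example usage:
#   update_fgms "$HOME/fgms"          # real run
#   RUN=echo update_fgms "$HOME/fgms" # dry run, prints the steps
```

Dropped into a cron job, this would give roughly the "not smart, but automated" update loop described above.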