From: Lonnie A. <li...@lo...> - 2020-05-01 23:32:34

> On May 1, 2020, at 6:13 PM, Michael Knill <mic...@ip...> wrote:
>
> Thanks Lonnie for all your work.
> I did a random check of my systems and all good as far as no additional binaries added. I do have one site that has binaries added for QueueMetrics, but this is in /mnt/kd/bin which I assume will still be fine.

Yes, /mnt/kd/bin is perfectly fine.

> I have a few APU1's out there which are certainly slower than APU2's. Are they going to be impacted significantly do you think?

Last I checked (a few years ago) the APU1 was faster than an APU2 for single-core; the APU2 wins for multi-core.

APU1 -> Performance: 59.1 secs. (Single-core test, lower is better)
APU2 -> Performance: 66.6 secs. (Single-core test, lower is better)

Test:
time ( echo "scale=3456; 4*a(1)" | bc -l )

I would expect the APU1 to be just fine, no noticeable change. Also keep in mind this FUSE unionfs change does not affect pure CPU-driven code, only filesystem accesses under the '/etc' or '/stat' paths.

Lonnie

> Regards
> Michael Knill

_______________________________________________
Astlinux-devel mailing list
Ast...@li...
https://lists.sourceforge.net/lists/listinfo/astlinux-devel
From: Michael K. <mic...@ip...> - 2020-05-01 23:13:15

Thanks Lonnie for all your work.

I did a random check of my systems and all good as far as no additional binaries added. I do have one site that has binaries added for QueueMetrics, but this is in /mnt/kd/bin which I assume will still be fine.

I have a few APU1's out there which are certainly slower than APU2's. Are they going to be impacted significantly do you think?

Regards
Michael Knill
From: Michael K. <li...@mk...> - 2020-05-01 22:27:18

> Am 01.05.2020 um 23:57 schrieb David Kerr <da...@ke...>:
>
> Lonnie,
> Great work. I see lots of updates in GitHub so you have been very busy. A question on the combined unionfs and /mnt/kd (as I think that is how I originally installed -- how would I tell?).

Hi David, you will see that on the Status tab: whether you have 2 (combined) or 3 (separate) partitions.

> When installing your newest build, will this combined partition be un-combined so our existing systems will have the same filesystem layout as a newly installed system?

No, it will stay as it is and both variations are supported.

> If not, is there a way to manually make the change?

No, except re-installing and re-formatting :-).

Michael
http://www.mksolutions.info
From: Lonnie A. <li...@lo...> - 2020-05-01 22:26:37

Hi David,

> -- how would I tell?

# show-union kd
/mnt/kd is not on ASTURW. /mnt/kd is a separate partition.

If you don't get that message, but rather a bunch of files, you are using "combined".

> When installing your newest build will this combined partition be un-combined

No, the "combined" vs. "separate" format defines the partitioning of the disk; the new version will deal with whichever format is used. The filesystem performance would not be affected much by a "combined" format.

Lonnie

> On May 1, 2020, at 4:57 PM, David Kerr <da...@ke...> wrote:
>
> Lonnie,
> Great work. I see lots of updates in GitHub so you have been very busy. A question on the combined unionfs and /mnt/kd (as I think that is how I originally installed -- how would I tell?). When installing your newest build will this combined partition be un-combined and so our existing systems will have the same filesystem layout as a newly installed system? If not is there a way to manually make the change?
>
> Thanks
> David
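Lonnie's check uses the AstLinux show-union tool. On a generic Linux box, a similar (hypothetical, not AstLinux-specific) way to see whether /mnt/kd is a separate partition is to look for it as its own mount point:

```shell
# Sketch: report whether /mnt/kd is mounted as its own filesystem
# ("separate" layout) or is merely a directory on another partition
# ("combined" layout). /proc/mounts lists one active mount per line.
if grep -q ' /mnt/kd ' /proc/mounts; then
    echo "separate: /mnt/kd is its own mount"
else
    echo "combined: /mnt/kd is not a distinct mount point"
fi
```

This is only a plain-Linux approximation; on AstLinux itself, `show-union kd` is the authoritative check.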
From: David K. <da...@ke...> - 2020-05-01 21:58:04

Lonnie,
Great work. I see lots of updates in GitHub so you have been very busy. A question on the combined unionfs and /mnt/kd (as I think that is how I originally installed -- how would I tell?). When installing your newest build, will this combined partition be un-combined so our existing systems will have the same filesystem layout as a newly installed system? If not, is there a way to manually make the change?

Thanks
David
From: Lonnie A. <li...@lo...> - 2020-05-01 18:27:48

Hi Dev minded folks,

We have completed a significant transition from a kernel-based unionfs filesystem to a FUSE (in-kernel + userland) unionfs filesystem.

A key goal: any change needs to be backward compatible with pre-existing ASTURW partitions.

With that, there are a few changes. All good changes, we hope. :-)

The old kernel unionfs (via a kernel patch) has not been supported for years; we stretched its usefulness to the Linux 3.16.x kernel, but it is an anchor keeping us from moving to newer kernel versions.

We considered a shiny new anchor :-), Aufs, which is also implemented via a kernel patch, is more featured (and bloated) than the previous kernel unionfs, and has targeted support for only select kernel versions.

Given the decision for AstLinux to only support 64-bit x86_64 boards going forward, we wondered how the FUSE unionfs would perform.

We gave FUSE unionfs a try on one of the lowest-powered x86_64 boards, the PC Engines APU2 ...

Previously the full root '/' read-only mount was overlaid with the writable ASTURW partition to yield a read/write filesystem ... unionfs.

While FUSE unionfs worked fine as a replacement for a '/' unionfs overlay, the filesystem speed suffered on the APU2; for example, reloading the firewall took 3x the time. BTW, the FUSE unionfs is slower since it adds a lot of userland <-> kernel context switches.

So then we thought: we really only need (mostly) the '/etc' and '/stat' directories to be overlaid to be made writable, which allowed moving the unionfs mount from the initrd into the /etc/rc startup script. That change returned the APU2 firewall reload performance to nearly its previous time ... now the '/usr' and '/lib' paths are very slightly faster than before since there is no kernel unionfs overhead, but any '/etc' access is somewhat slower. Overall, no significant filesystem performance difference.

So, FUSE unionfs for select directories is the new solution for AstLinux 1.3.10 and onward. No more kernel-limiting anchors.

This also better secures the filesystem: no accidental (or malicious) changing of '/usr' or '/lib' binaries and scripts.

The devil is in the details ...

Some users may have partitioned their AstLinux installation by selecting "Combined Unionfs and /mnt/kd/ partition"; that is automatically supported when there is no separate ASTKD partition for '/mnt/kd'.

Note: New installations now only support "Create separate Unionfs and /mnt/kd/ partitions", which has been the better choice.

Some users may have installed binary blobs like codec_g729a.so to the Asterisk /usr/lib/asterisk/modules/ directory. The new default for this directory is read-only, but if the user adds ASTERISK_RW_MODULES_DIR="yes" to their user.conf, this directory will be overlaid by the ASTURW partition, as it was before. This also creates an easy way to disable incompatible binary blobs that keep Asterisk from starting.

Finally, some homework for those reading this (yes, you!). To make sure we did not overlook anything, issue this command:
--
show-union all | grep -v -e /asturw/etc -e /asturw/stat -e lost.found -e /asturw/mnt/kd
--

You might see
--
/oldroot/mnt/asturw
/oldroot/mnt/asturw/.rnd
/oldroot/mnt/asturw/.asterisk_history
--
which can be safely ignored.

If you see some '/usr' or '/lib' files that seem important to you, let us know.

The AstLinux Team
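The ASTERISK_RW_MODULES_DIR override described above is a single user.conf line; a sketch of the fragment (the variable and value are quoted from the announcement, the file path and comments are illustrative):

```shell
## /mnt/kd/user.conf (fragment, illustrative)
## Restore the pre-1.3.10 behavior: overlay /usr/lib/asterisk/modules/
## with the writable ASTURW partition so binary blobs such as
## codec_g729a.so can live there. Left unset, the directory is read-only.
ASTERISK_RW_MODULES_DIR="yes"
```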
From: Michael K. <mic...@ip...> - 2020-04-22 11:20:50

Well, it's been running fine on Azure for a few days now with no issues. I did notice a couple of lost packets today, but otherwise it's been flawless.

It's currently costing me around $2 AU per day, which is not cheap but pretty good for a 99.9% uptime SLA. It's currently on an F1s, which is 1 CPU, 2 GB RAM, 4 GB storage.

I will get something onto the site if anyone is interested.

Regards
Michael Knill

On 16/4/20, 9:38 am, "Lonnie Abelbeck" <li...@lo...> wrote:

    Hi Michael,

    Great to hear!

    If their serial setup is anything similar to Linode's

    https://doc.astlinux-project.org/userdoc:hosted_guest_vm_linode

    Search for "Out-of-Band Console Access"

    Basically a simple edit of the /etc/inittab file.

    > Also would you like me to write up an article for the build process?

    Is it pretty straightforward? Sure, any helpful documentation is great.

    Lonnie

    > On Apr 15, 2020, at 6:25 PM, Michael Knill <mic...@ip...> wrote:
    >
    > Hi Guys
    >
    > I have put my office AstLinux system on Azure to test it out. Seems to be working fine. I will let it run for a week or so to see how it goes and report back.
    >
    > One annoying thing is that Azure is looking for a serial console but I have installed the vm board type. Is there any way I can set this up?
    >
    > Also would you like me to write up an article for the build process?
    >
    > Thanks
    >
    > Regards
    > Michael Knill
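The /etc/inittab edit Lonnie refers to typically means adding a serial getty entry. A sketch in BusyBox inittab format (the device name and baud rate are assumptions to match a provider's serial console, not taken from the thread):

```shell
## /etc/inittab (fragment, illustrative) -- BusyBox format: id::action:process
## Spawn a login getty on the first serial port and respawn it on exit;
## adjust ttyS0 / 115200 to whatever the hosting provider's console expects.
ttyS0::respawn:/sbin/getty -L ttyS0 115200 vt100
```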
From: Michael K. <mic...@ip...> - 2020-04-16 22:53:16

It was fine yesterday! Will let you know.

Regards
Michael Knill

On 17/4/20, 8:09 am, "Lonnie Abelbeck" <li...@lo...> wrote:

    Hi Michael,

    > Yay serial console is now working.

    Good to hear. It will be interesting to see if the audio quality via Azure is on par with a local hardware box. Please report back after you have learned things.

    Lonnie
From: Lonnie A. <li...@lo...> - 2020-04-16 22:09:08
|
Hi Michael,

> Yay serial console is now working.

Good to hear. It will be interesting to see if the audio quality via Azure is on par with a local hardware box. Please report back after you have learned things.

Lonnie

> On Apr 15, 2020, at 8:48 PM, Michael Knill <mic...@ip...> wrote:
>
> Yay serial console is now working.
> Thanks for that. I figured it would be easy.
>
> Regards
> Michael Knill

[...] |
From: Michael K. <mic...@ip...> - 2020-04-16 01:48:43
|
Yay serial console is now working.
Thanks for that. I figured it would be easy.

Regards
Michael Knill

On 16/4/20, 9:38 am, "Lonnie Abelbeck" <li...@lo...> wrote:

    Hi Michael,

    Great to hear !

    If their serial setup is anything similar to Linode's

    https://doc.astlinux-project.org/userdoc:hosted_guest_vm_linode

    Search for "Out-of-Band Console Access"

    Basically a simple edit of the /etc/inittab file.

    > Also would you like me to write up an article for the build process?

    Is it pretty straight-forward? sure, any helpful documentation is great.

    Lonnie

[...]

_______________________________________________
Astlinux-devel mailing list
Ast...@li...
https://lists.sourceforge.net/lists/listinfo/astlinux-devel |
From: Michael K. <mic...@ip...> - 2020-04-15 23:47:40
|
Thanks, I will try this out. It's not that easy, as you need to build in Hyper-V first, then convert and upload, but it's OK.

Regards
Michael Knill

On 16/4/20, 9:38 am, "Lonnie Abelbeck" <li...@lo...> wrote:

[...]

_______________________________________________
Astlinux-devel mailing list
Ast...@li...
https://lists.sourceforge.net/lists/listinfo/astlinux-devel |
From: Lonnie A. <li...@lo...> - 2020-04-15 23:38:32
|
Hi Michael,

Great to hear!

If their serial setup is anything similar to Linode's

https://doc.astlinux-project.org/userdoc:hosted_guest_vm_linode

Search for "Out-of-Band Console Access"

Basically a simple edit of the /etc/inittab file.

> Also would you like me to write up an article for the build process?

Is it pretty straightforward? Sure, any helpful documentation is great.

Lonnie

> On Apr 15, 2020, at 6:25 PM, Michael Knill <mic...@ip...> wrote:
[...] |
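[Editor's note] For reference, the /etc/inittab change mentioned above is typically a single getty line for the serial port. On a BusyBox-based system such as AstLinux it looks roughly like the sketch below — the device name (ttyS0), baud rate (115200) and terminal type are assumptions here; the Linode doc linked above has the authoritative line:

```
# Hypothetical BusyBox inittab entry enabling an out-of-band serial console
# Format: <id>::<action>:<process>
ttyS0::respawn:/sbin/getty -L 115200 ttyS0 vt100
```

With `respawn`, init restarts the getty whenever the serial session ends, so the console stays available.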
From: Michael K. <mic...@ip...> - 2020-04-15 23:25:16
|
Hi Guys

I have put my office Astlinux system on Azure to test it out. Seems to be working fine. I will let it run for a week or so to see how it goes and report back.

One annoying thing is that Azure is looking for a serial console but I have installed the vm board type. Is there any way I can set this up?

Also would you like me to write up an article for the build process?

Thanks

Regards
Michael Knill |
From: Lonnie A. <li...@lo...> - 2020-04-14 16:51:26
|
> On Apr 14, 2020, at 11:46 AM, Armin Tüting <arm...@tu...> wrote:
>
> On Saturday, 11.04.2020, at 09:27 -0500, Lonnie Abelbeck wrote:
>
> [...]
>
>> Hi Armin,
>
> Hi Lonnie,
>
>> I just tested the toolchain build on Debian 10.3 and it does not run on CentOS 7
>> --
>> $ /home/dev/astlinux/x-tools-1.24.0-3.16-2.27/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc --version
>> /home/dev/astlinux/x-tools-1.24.0-3.16-2.27/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /home/dev/astlinux/x-tools-1.24.0-3.16-2.27/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc)
>> --
>>
>> Sad, but not totally unexpected. Looks like each user will need to build the toolchain if we can't standardize on a build host environment.
>
> Toolchain is working. The download of 'runnix-0.6.0a.tar.gz.sha1' is failing as it is missing from 'https://s3.amazonaws.com/mirror.astlinux-project/runnix5'

Yes, that is expected until we get a syslinux build version we like.

I deleted runnix-0.6.0a.tar.gz.sha1 and added runnix-0.6.0b.tar.gz.sha1 on my private server

https://abelbeck.com/lonnie/runnix6

The new 0.6.x runnix is still a work in progress. Mostly syslinux changes/issues.

Lonnie |
From: Armin T. <arm...@tu...> - 2020-04-14 16:46:16
|
On Saturday, 11.04.2020, at 09:27 -0500, Lonnie Abelbeck wrote:

[...]

> Hi Armin,

Hi Lonnie,

> I just tested the toolchain build on Debian 10.3 and it does not run on CentOS 7
> --
> $ /home/dev/astlinux/x-tools-1.24.0-3.16-2.27/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc --version
> /home/dev/astlinux/x-tools-1.24.0-3.16-2.27/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /home/dev/astlinux/x-tools-1.24.0-3.16-2.27/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc)
> --
>
> Sad, but not totally unexpected. Looks like each user will need to build the toolchain if we can't standardize on a build host environment.

Toolchain is working. The download of 'runnix-0.6.0a.tar.gz.sha1' is failing as it is missing from 'https://s3.amazonaws.com/mirror.astlinux-project/runnix5'

> Lonnie

[...] |
From: Lonnie A. <li...@lo...> - 2020-04-11 14:28:06
|
> On Apr 11, 2020, at 7:24 AM, Armin Tüting <arm...@tu...> wrote:
>
> On Friday, 10.04.2020, at 21:30 +0200, Michael Keuter wrote:
>
> [...]
>
> I think it's a bad idea to move to Debian only though. We should still consider CentOS 7 at least.
> If we move to Debian only, the GA cycle needs to be considered as far as security fixes are concerned.

Hi Armin,

I just tested the toolchain build on Debian 10.3 and it does not run on CentOS 7

--
$ /home/dev/astlinux/x-tools-1.24.0-3.16-2.27/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc --version
/home/dev/astlinux/x-tools-1.24.0-3.16-2.27/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc: /lib64/libstdc++.so.6: version `CXXABI_1.3.9' not found (required by /home/dev/astlinux/x-tools-1.24.0-3.16-2.27/x86_64-unknown-linux-gnu/bin/x86_64-unknown-linux-gnu-gcc)
--

Sad, but not totally unexpected. Looks like each user will need to build the toolchain if we can't standardize on a build host environment.

Lonnie |
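[Editor's note] The error above can be diagnosed by listing which CXXABI symbol versions the host's libstdc++ actually exports. A small sketch — the library path varies by distro (e.g. /usr/lib64/libstdc++.so.6 on CentOS 7), so treat the path in the usage line as an assumption:

```shell
# Print the CXXABI symbol versions a libstdc++ shared object provides, sorted.
# A toolchain built with gcc 6.x needs CXXABI_1.3.9; CentOS 7's system
# libstdc++ (built with gcc 4.8) stops at CXXABI_1.3.7, hence the failure above.
list_cxxabi() {
  strings "$1" | grep -o 'CXXABI_[0-9.]*' | sort -Vu
}
```

Usage would be something like `list_cxxabi /usr/lib64/libstdc++.so.6` — if the version the prebuilt gcc wants is not in the output, that gcc cannot run on that host.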
From: Armin T. <arm...@tu...> - 2020-04-11 13:11:19
|
On Friday, 10.04.2020, at 21:30 +0200, Michael Keuter wrote:

[...]

> I also think Debian 10 is a good choice. Either as small netinstall ISO or as template for hypervisors.
> I use an LXC container on Proxmox.

I think it's a bad idea to move to Debian only though. We should still consider CentOS 7 at least.
If we move to Debian only, the GA cycle needs to be considered as far as security fixes are concerned.

--
Armin Tüting -- PGP-KeyID: 0xD0F0E6C2
+49-6185-898685 -- mailto:arm...@tu...
() ascii ribbon campaign -- against html e-mail
/\ www.asciiribbon.org -- against proprietary attachments |
From: Michael K. <li...@mk...> - 2020-04-10 19:30:35
|
> On 10.04.2020 at 21:15, Lonnie Abelbeck <li...@lo...> wrote:
>
> [...]
>
> What are your thoughts on whether we should offer the toolchain as a binary download for an x86_64 host build environment?

If this were easily possible, that would be great, because it would simplify the initial build process a lot and save a lot of time.
Maybe that move would attract more users to try to build their own images in the future.

> We can still show how to use crosstool-NG 1.24.0 and .config to generate the toolchain.
>
> I built the toolchain on a Debian 10.3 box, but it should run most anywhere given a glibc x86_64 host, correct?

I also think Debian 10 is a good choice. Either as a small netinstall ISO or as a template for hypervisors.
I use an LXC container on Proxmox.

> Lonnie

Michael

http://www.mksolutions.info |
From: Lonnie A. <li...@lo...> - 2020-04-10 19:15:43
|
> On Apr 9, 2020, at 3:52 PM, David Kerr <Da...@Ke...> wrote:
>
> I think you know my opinion... I'm always one that likes to keep up-to-date so I definitely support the move. I am not concerned about filesize increases, though it is definitely something to highlight and find out if other users care. I just don't know if this runs the risk of not working on very old h/w.
>
> ...

> I have been testing a new crosstool-NG 1.24.0 version.
>
> I settled in on a good update:
>
> glibc 2.27, binutils 2.29.1, gcc 6.5.0

What are your thoughts on whether we should offer the toolchain as a binary download for a x86_64 host build environment ?

We can still show how to use crosstool-NG 1.24.0 and .config to generate the toolchain.

I built the toolchain on a Debian 10.3 box, but it should run most anywhere given a glibc x86_64 host, correct ?

Lonnie |
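[Editor's note] If a prebuilt toolchain tarball were published, the consumer side would reduce to a download plus an integrity check before unpacking. A sketch under the assumption that a SHA-256 sum is published alongside the tarball — every filename below is hypothetical:

```shell
# Verify a downloaded tarball against a published SHA-256 checksum
# before extracting it into the build tree.
verify_sha256() {
  file=$1 expected=$2
  actual=$(sha256sum "$file" | awk '{print $1}')
  [ "$actual" = "$expected" ]
}

# Hypothetical usage (filenames invented for illustration):
#   verify_sha256 x-tools.tar.gz "$(cat x-tools.tar.gz.sha256)" \
#     && tar -xzf x-tools.tar.gz
```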
From: Michael K. <li...@mk...> - 2020-04-10 19:12:31
|
> On 10.04.2020 at 20:56, David Kerr <da...@ke...> wrote:
>
> Thanks Michael. I gave it a try and still have the same problem. I had problems building from that new source... it is missing "configure" and broke then. Rather than debug all that I just copied the new .c (and accompanying new .h) files into my existing build tree. So I got it to build fine, but still had the same error message on ZFS. There are a lot of changes between the 1.4.1 version in our build and the 1.4.2 version at github. I did not look to see what all they did.
>
> David

Hi David,

As I mentioned a while ago, I sometimes have the same problem (using ZFS too), but then I just start the build process again, and it resumes at the point where it stopped and then finishes the build.
Have you tried that too? And does it work for you as well?

[...]

Michael

http://www.mksolutions.info |
From: David K. <da...@ke...> - 2020-04-10 18:57:05
|
Thanks Michael. I gave it a try and still have the same problem. I had problems building from that new source... it is missing "configure" and it broke there. Rather than debug all that, I just copied the new .c (and accompanying new .h) files into my existing build tree. So I got it to build fine, but still had the same error message on ZFS. There are a lot of changes between the 1.4.1 version in our build and the 1.4.2 version at github. I did not look to see what all they did.

David

On Fri, Apr 10, 2020 at 6:06 AM Michael Keuter <li...@mk...> wrote:

> [...]
>
> BTW: Here is an updated version of "genext2fs":
> https://github.com/bestouff/genext2fs
>
> Maybe this fixes your ZFS issues.
>
> Michael |
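[Editor's note] A note on the missing "configure": git checkouts of autotools projects usually ship only configure.ac, and the configure script has to be regenerated before building. A sketch, assuming autoconf/automake are installed on the build host:

```shell
# Regenerate the autotools build system for a source tree that lacks a
# generated ./configure (typical for a git checkout), then build it.
build_from_git_checkout() {
  src_dir=$1
  cd "$src_dir" || return 1
  autoreconf -fi &&   # creates ./configure (and friends) from configure.ac
  ./configure &&
  make
}
```

For example, `build_from_git_checkout genext2fs` after cloning the repository above, rather than copying the new .c/.h files into an older tree.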
From: Michael K. <li...@mk...> - 2020-04-10 10:06:23
|
> On 09.04.2020 at 22:52, David Kerr <da...@ke...> wrote:
>
> [...]
>
> Specifically buildroot no longer uses genext2fs. That is where my builds fail with the ZFS filesystem and it looks like genext2fs has not been updated in 7 years.
>
> [...]

BTW: Here is an updated version of "genext2fs":
https://github.com/bestouff/genext2fs

Maybe this fixes your ZFS issues.

Michael

http://www.mksolutions.info |
From: David K. <da...@ke...> - 2020-04-09 20:52:50
|
I think you know my opinion... I'm always one that likes to keep up-to-date, so I definitely support the move. I am not concerned about filesize increases, though it is definitely something to highlight and find out if other users care. I just don't know if this runs the risk of not working on very old h/w.

I would also like us to look at what has changed inside the fs directory at buildroot and figure out if it is an easy upgrade for us. Specifically, buildroot no longer uses genext2fs. That is where my builds fail with the ZFS filesystem, and it looks like genext2fs has not been updated in 7 years. I don't know if what buildroot uses now fixes the problem or not, but that is where I would start -- but will that open a whole other mess? For example, I think you now have to set a filesystem size manually rather than have it auto-calculated by the fs/ext2/genext2fs.sh script (which is gone now).

Maybe we should fork an astlinux-next project on github and do a lot of experimenting there before messing too much with the mainline.

David

On Thu, Apr 9, 2020 at 11:04 AM Lonnie Abelbeck <li...@lo...> wrote:

> Regarding - Update the toolchain using a newer gcc and glibc
>
> I have been testing a new crosstool-NG 1.24.0 version.
>
> [...] |
From: Lonnie A. <li...@lo...> - 2020-04-09 15:03:46
|
Regarding - Update the toolchain using a newer gcc and glibc

I have been testing a new crosstool-NG 1.24.0 version.

I settled in on a good update:

glibc 2.27, binutils 2.29.1, gcc 6.5.0

After a few package tweaks, and patches for the new glibc, it builds and runs.

But, after a day or so of frustration, the initrd with the new glibc crashed when called by the kernel. I tested many toolchain build combinations; using versions much like our current ones it worked, but with any bump in the glibc and gcc it crashed.

So, if you are playing along at home, here are some clues ...

Looking at initrd sizes:

Current:
eglibc 2.18, binutils 2.22, gcc 4.8.3
initrd.img size: 2658045

Testing:
glibc 2.27, binutils 2.29.1, gcc 6.5.0
initrd.img size: 4001609

OK, I'll end the suspense: the problem is the kernel has a limit on the allowed ramdisk size, which needed to be increased.

This kernel change:

-CONFIG_BLK_DEV_RAM_SIZE=8192
+CONFIG_BLK_DEV_RAM_SIZE=16384

Or, passing ramdisk_size=16384 via the KCMD in the .run.conf

allows the new initrd to run.

The resulting astlinux image also runs, though bigger in size:

Current:
eglibc 2.18, binutils 2.22, gcc 4.8.3
astlinux.tar.gz size: 50851434

Testing:
glibc 2.27, binutils 2.29.1, gcc 6.5.0
astlinux.tar.gz size: 53732464

Note that the astlinux.tar.gz contains two copies of glibc, one in the initrd and one in the run-image.

More size info:

Current:
eglibc 2.18, binutils 2.22, gcc 4.8.3
# ls -l /lib/*so
-rwxr-xr-x 1 root root  120224 Mar 31 17:00 /lib/ld-2.18.so
-rwxr-xr-x 1 root root 1525336 Mar 31 17:00 /lib/libc-2.18.so
-rwxr-xr-x 1 root root   30856 Mar 31 17:00 /lib/libcrypt-2.18.so
-rwxr-xr-x 1 root root   14528 Mar 31 17:00 /lib/libdl-2.18.so
-rwxr-xr-x 1 root root  993504 Mar 31 17:00 /lib/libm-2.18.so
-rwxr-xr-x 1 root root   76608 Mar 31 17:00 /lib/libnsl-2.18.so
-rwxr-xr-x 1 root root   22728 Mar 31 17:00 /lib/libnss_dns-2.18.so
-rwxr-xr-x 1 root root   43384 Mar 31 17:00 /lib/libnss_files-2.18.so
-rwxr-xr-x 1 root root  140676 Mar 31 15:01 /lib/libpthread-2.18.so
-rwxr-xr-x 1 root root   76400 Mar 31 17:00 /lib/libresolv-2.18.so
-rwxr-xr-x 1 root root   31512 Mar 31 17:00 /lib/librt-2.18.so
-rwxr-xr-x 1 root root   10472 Mar 31 17:00 /lib/libutil-2.18.so

Testing:
glibc 2.27, binutils 2.29.1, gcc 6.5.0
# ls -l /lib/*so
-rwxr-xr-x 1 root root  157352 Apr  8 23:06 /lib/ld-2.27.so
-rwxr-xr-x 1 root root 1767720 Apr  8 23:06 /lib/libc-2.27.so
-rwxr-xr-x 1 root root   39064 Apr  8 23:06 /lib/libcrypt-2.27.so
-rwxr-xr-x 1 root root   14584 Apr  8 23:06 /lib/libdl-2.27.so
-rwxr-xr-x 1 root root 1648920 Apr  8 23:06 /lib/libm-2.27.so
-rwxr-xr-x 1 root root   84752 Apr  8 23:06 /lib/libnsl-2.27.so
-rwxr-xr-x 1 root root   22712 Apr  8 23:06 /lib/libnss_dns-2.27.so
-rwxr-xr-x 1 root root   47432 Apr  8 23:06 /lib/libnss_files-2.27.so
-rwxr-xr-x 1 root root 2389056 Apr  8 21:54 /lib/libpthread-2.27.so
-rwxr-xr-x 1 root root   80528 Apr  8 23:06 /lib/libresolv-2.27.so
-rwxr-xr-x 1 root root   31552 Apr  8 23:06 /lib/librt-2.27.so
-rwxr-xr-x 1 root root   10464 Apr  8 23:06 /lib/libutil-2.27.so

You might ask why use glibc 2.27 vs. glibc 2.28 ? OpenWrt uses 2.27, as many compatibility changes occur in 2.28 which would require additional package patches to fix. It seems reasonable to stick with glibc 2.27 for compatibility with older packages.

Finally, do you see the potential problem here ? If you said "revert-to-previous to 1.3.8 (or earlier) will not work" you would be correct. Since we only have one initrd.img file (not a separate initrd.img for each run-image), a revert-to-previous would use the newer initrd.img with the older kernel config ... crash. Possibly we could create a temporary fix for that by automatically passing ramdisk_size=16384 to the kernel command line, at least with kernel-reboot.

Lonnie |
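[Editor's note] To make the size relationship above concrete: CONFIG_BLK_DEV_RAM_SIZE (and the ramdisk_size= boot parameter) are expressed in 1 KiB blocks, so 8192 means an 8 MiB ramdisk and 16384 means 16 MiB, and it is the *uncompressed* initrd that has to fit — the 4,001,609-byte initrd.img above is the gzip'd size. A sketch of a pre-flight check (the initrd path and helper name are illustrative):

```shell
# Return success if the uncompressed size of a gzip'd initrd fits within
# the kernel ramdisk limit, given in 1 KiB blocks (CONFIG_BLK_DEV_RAM_SIZE).
initrd_fits_ramdisk() {
  img=$1 limit_kb=$2
  size_kb=$(( ($(gzip -dc < "$img" | wc -c) + 1023) / 1024 ))
  [ "$size_kb" -le "$limit_kb" ]
}

# Hypothetical usage:
#   initrd_fits_ramdisk initrd.img 8192 \
#     || echo "bump CONFIG_BLK_DEV_RAM_SIZE or pass ramdisk_size= on KCMD"
```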
From: Michael K. <mic...@ip...> - 2020-04-08 00:09:07
|
I have a Mango. Cool little Wireguard VPN router running OpenWRT. Regards Michael Knill On 8/4/20, 12:42 am, "Michael Keuter" <li...@mk...> wrote: > Am 07.04.2020 um 14:47 schrieb David Kerr <da...@ke...>: > > My interest was triggered recently when I came across NanoPi R2S (http://wiki.friendlyarm.com/wiki/index.php/NanoPi_R2S and https://www.friendlyarm.com/index.php?route=product/product&path=69&product_id=282) $25 with case, 2x Gbps 1GB RAM. Of course no sooner had word got out than they sold out. I almost did an impulse buy but hesitated. I think you can get from China for a little more money through AliExpress. You can also google for "<pick a fruit> pi dual ethernet". Try orange or banana :-) > > David OK, this is interesting. It looks similar to the GL.inet Mango :-). https://www.gl-inet.com/products/gl-mt300n-v2/ The Mango has only 100MBit/s Ethernet, but supports OpenVPN + WireGuard client and server out of the box! I installed one of the Creta boxes as WireGuard server shortly: https://www.gl-inet.com/products/gl-ar750/ > On Tue, Apr 7, 2020 at 5:44 AM Michael Keuter <li...@mk...> wrote: > > > > Am 06.04.2020 um 21:57 schrieb David Kerr <da...@ke...>: > > > > Kernel 4.19 has been selected by the linux foundation for Super LTS which means that it should have some support structure for many many years. I'll back off my previous proposal to adopt an off-the-shelf underlying platform... the points about piecemeal package updates potentially causing stability issues is valid. > > > > I'll add one more potentially controversial suggestion... in addition to x86_64 how about ARM64 as well? > > Hi David, > do you have any specific ARM64 device with multiple gigabit NICs in mind? > I couldn't find boards either from Ubiquiti (MIPS) or Mikrotik (MIPS + ARM32). > If you find some that are significantly cheaper than a Qotom or APU2 then please report. > > > For the build system/tools I think we should update to the current buildroot. 
> > While we have been keeping individual package versions up-to-date, we have not been keeping the build system up-to-date (crosstools, config files, etc.), and it is this that causes problems when underlying build hosts are updated. And I am NOT in favor of mandating and/or supporting only one host OS for the build system, be it CentOS, Ubuntu, whatever.
> >
> > David.
> >
> > On Thu, Apr 2, 2020 at 2:16 PM Lonnie Abelbeck <li...@lo...> wrote:
> > Yes, but Kernel 4.19.x does not yet have an LTS maintainer ... hopefully Ben Hutchings, but unknown as of yet.
> >
> > Debian 10 uses Kernel 4.19.x and has an LTS 2024 date.
> >
> > Debian 11 will use Kernel 5.4.x, but we would need to wait a year or two before that kernel is tested enough for production.
> >
> > CentOS 8 uses 4.18.x, which is maintained within the RHEL/CentOS project ... until 2029!
> >
> > Lonnie
> >
> > > On Apr 2, 2020, at 12:01 PM, Michael Keuter <li...@mk...> wrote:
> > >
> > > Only the latest 4.x and 5.x kernels are supported.
> > > So 4.14 is out.
> > >
> > > http://aufs.sourceforge.net/
> > >
> > > Sent from a mobile device.
> > >
> > > Michael Keuter
> > >
> > > Begin forwarded message:
> > >
> > >> From: Lonnie Abelbeck <li...@lo...>
> > >> Date: 2 April 2020 at 17:40:23 CEST
> > >> To: AstLinux Developers Mailing List <ast...@li...>
> > >> Subject: Re: [Astlinux-devel] Astlinux base system
> > >> Reply-To: AstLinux Developers Mailing List <ast...@li...>
> > >>
> > >> My short list for AstLinux 1.4.x would be:
> > >>
> > >> - Kernel bump to 4.x
> > >> - Switch from unionfs to Aufs
> > >> - Support x86_64 only
> > >> - Update the toolchain using a newer gcc and glibc
> > >> - RUNNIX runnix-0.6.0 would be x86_64 only
> > >>
> > >> As for Buildroot, our version is quite up to date for x86_64 and the packages we use. We have packages and fixes that are not upstream.
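[Editor's note] Lonnie's point about carrying "packages and fixes that are not upstream" is the problem Buildroot's BR2_EXTERNAL mechanism addresses: project-specific packages, patches, and defconfigs live in a separate tree, so the Buildroot checkout itself stays pristine and easy to bump to a newer release. A minimal sketch under stated assumptions — the `astlinux-custom` tree name, the defconfig name, and the package layout below are hypothetical illustrations, not AstLinux's actual structure:

```shell
# Hypothetical out-of-tree customization layer (generic BR2_EXTERNAL layout):
#
# astlinux-custom/
#   external.desc                 # contains: name: ASTLINUX_CUSTOM
#   external.mk                   # include $(sort $(wildcard \
#                                 #   $(BR2_EXTERNAL_ASTLINUX_CUSTOM_PATH)/package/*/*.mk))
#   Config.in                     # sources each package/*/Config.in
#   configs/astlinux_x86_64_defconfig
#   package/...                   # packages and fixes not (yet) upstream

# Build against a pristine, pinned Buildroot release branch:
git clone -b 2020.02.x https://git.buildroot.net/buildroot
make -C buildroot BR2_EXTERNAL=$PWD/astlinux-custom astlinux_x86_64_defconfig
make -C buildroot
```

Updating Buildroot then becomes a branch switch plus a rebuild, instead of re-merging a forked copy of the whole tree.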
> > >> As for Linux build systems, I would prefer to pick a common basic distro version, CentOS or Debian, and instruct users to only use that version in a VM for building AstLinux. This would add a level of consistency for the HOST and simplicity for the documentation. There is no extra credit for making our build system work with any distro.
> > >>
> > >> As for toolchains, it would be ideal to find a supported, trusted, x86_64 pre-built toolchain we could use (like a package) with the kernel version we decide to use. Crosstool-NG is also an option; if we use a common build HOST, we could possibly create our own x86_64 pre-built toolchain, which could be downloaded instead of rebuilt for every build system install.
> > >>
> > >> Finally, the simultaneous equations that need to be solved are: a kernel LTS version where Aufs is actively supported, and one that works with a toolchain we like.
> > >>
> > >> Lonnie
> > >>
> > >>> On Apr 2, 2020, at 8:46 AM, David Kerr <Da...@Ke...> wrote:
> > >>>
> > >>> Changed subject to be more relevant to the discussion.
> > >>>
> > >>> I thought more about this and realized that my itch is not really that we should use a standard distribution, but that we should be thinking about bringing AstLinux more up-to-date and discussing how to go about it. Yes, the 3.16.x kernel works fine and will do for some time. But the day will come when we must move. In addition, our base system is built from a very old version of buildroot, toolchains, libraries, etc., and we run into issues as underlying hosts change... mostly simple things like version checks failing, which we have been able to work around. But it is getting harder... the most recent case being when I tried the ZFS file system on Ubuntu 19.10.
> > >>>
> > >>> Maybe we should change how we build AstLinux to start from a standard/stable version of Buildroot (rather than essentially copying all of Buildroot into our own source tree).
> > >>> I'm not even sure what I mean by that, but I think we should look for a way to stay closer aligned with Buildroot and be able to easily update to newer Buildroot versions.
> > >>>
> > >>> David.
> > >>>
> > >>> On Mon, Mar 30, 2020 at 4:23 PM Michael Knill <mic...@ip...> wrote:
> > >>> After using OpenWRT for a little while, I decided to abandon its use for two reasons: 1) Wi-Fi driver support, but also 2) packages. It was so difficult to maintain a standard image across all my routers; I never knew which package was going to affect what, what would break when I upgraded, and then how I would revert. Although I hate Mikrotik's UI and command structure, I decided to suck it up and move to them because of the single firmware image, amongst other things.
> > >>>
> > >>> Although I am not that knowledgeable in the area, I believe the stability of AstLinux comes from the minimalist approach taken to what is included and enabled.
> > >>> I therefore really like the current approach and certainly would not want to move to 'a series of entangled package upgrades'.
> > >>>
> > >>> Thanks David for asking these questions, as it's really important that we understand what we are doing and why.
> > >>>
> > >>> Regards
> > >>> Michael Knill
> > >>>
> > >>> On 31/3/20, 3:11 am, "Lonnie Abelbeck" <li...@lo...> wrote:
> > >>>
> > >>> Hi David, et al.,
> > >>>
> > >>> Brainstorming ideas are always appreciated.
> > >>>
> > >>> Though I think our "AstLinux Principles" still apply today, as much as ever:
> > >>>
> > >>> AstLinux Principles:
> > >>> https://www.astlinux-project.org/about.html
> > >>>
> > >>> Our current 3.16.x kernel has been getting backported fixes from upstream [1] for quite some time by Ben Hutchings ... some would argue that the current 3.16.x kernel has "all the fixes but none of the new bugs" :-)
> > >>>
> > >>> Correct me if you disagree, but since we use the i586/x86_64 architecture, much of the newer kernel hardware support does not apply to us.
> > >>> This is even more true for AstLinux running as a VM guest.
> > >>>
> > >>> Additionally, a big sticking point in upgrading beyond the 3.16.x kernel is that our current unionfs kernel patches are no longer supported in 4.x. It looks like we will need to switch to Aufs, which we have been giving some thought; making it backward compatible is a concern.
> > >>>
> > >>> Some might argue we should stick with a rock-solid 3.16.x kernel for another year or so; others might suggest 4.x (unclear which is best) is warranted. 5.x is too new to consider IMO, not enough testing.
> > >>>
> > >>> Finally, packages vs. firmware image. I'm convinced the firmware-image approach is ideal for an appliance like AstLinux. I've played with OpenWrt on an EdgeRouter X, and found the package-upgrade approach leaves much to be desired compared with a firmware-image approach.
> > >>>
> > >>> Please discuss ...
> > >>>
> > >>> Lonnie
> > >>>
> > >>> [1] https://git.kernel.org/pub/scm/linux/kernel/git/bwh/linux-stable-queue.git/log/queue-3.16
> > >>>
> > >>>> On Mar 30, 2020, at 10:13 AM, David Kerr <da...@ke...> wrote:
> > >>>>
> > >>>> Moving to devel list.
> > >>>>
> > >>>> This once again prompts the question of what to do about the AstLinux kernel version. It feels like we are getting further and further behind, and should maybe even skip 4.x and go straight to 5.x.
> > >>>>
> > >>>> Or... get out of the business of providing the low-level OS components and develop on top of a standard "lightweight" distribution. When AstLinux started up, there was a need for a tight, small, integrated system that would work on typical network gateway hardware of the time. But today's typical network gateway hardware is very capable of running a standard Linux distribution. And if we moved all the AstLinux-unique features over to installable packages, I think we would benefit greatly.
> > >>>>
> > >>>> Just thought I would throw this out there and see what people think.
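[Editor's note] For readers unfamiliar with the unionfs/Aufs question in this thread: both stack a writable branch over a read-only image, which is how a firmware-image appliance can still persist configuration and revert cleanly. A rough sketch of such a union mount, assuming an aufs-patched kernel; every path and device name below is illustrative, not AstLinux's actual layout:

```shell
# Illustrative union mount as used by firmware-image systems (requires root
# and a kernel with the out-of-tree aufs patches; paths are hypothetical).

mkdir -p /mnt/ro /mnt/rw /mnt/union
mount -t squashfs -o ro /dev/sda1 /mnt/ro     # read-only firmware image
mount -t tmpfs tmpfs /mnt/rw                  # writable branch (RAM-backed)
mount -t aufs -o br=/mnt/rw=rw:/mnt/ro=ro none /mnt/union

# Writes land only in /mnt/rw; /mnt/ro stays untouched, so reverting to the
# shipped firmware means discarding the writable branch.
```

The backward-compatibility concern mentioned above is that unionfs and Aufs use different mount options and branch semantics, so existing union mounts would need migrating.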
> > >>>> David.
> > >>>>
> > >>>> On Mon, Mar 30, 2020 at 10:36 AM Lonnie Abelbeck <li...@lo...> wrote:
> > >>>> Greetings,
> > >>>>
> > >>>> FYI, I forwarded (below) a note from the WireGuard mailing list.
> > >>>>
> > >>>> My favorite VPN type (and that of many of you reading this) has achieved a major milestone. Additionally, an outside security audit of WireGuard has been performed.
> > >>>>
> > >>>> For those previously concerned about the non-official status of WireGuard, those concerns should now be minimized.
> > >>>>
> > >>>> Lonnie
> > >>>>
> > >>>> ================
> > >>>> Begin forwarded message:
> > >>>>
> > >>>> From: "Jason A. Donenfeld" <Ja...@zx...>
> > >>>> Subject: [ANNOUNCE] WireGuard 1.0.0 for Linux 5.6 Released
> > >>>> Date: March 29, 2020 at 9:16:43 PM CDT
> > >>>> To: WireGuard mailing list <wir...@li...>
> > >>>>
> > >>>> Hi folks,
> > >>>>
> > >>>> Earlier this evening, Linus released [1] Linux 5.6, which contains our
> > >>>> first release of WireGuard. This is quite exciting. It means that
> > >>>> kernels from here on out will have WireGuard built-in by default. And
> > >>>> for those of you who were scared away prior by the "dOnT uSe tHiS
> > >>>> k0de!!1!" warnings everywhere, you now have something more stable to
> > >>>> work with.
> > >>>>
> > >>>> The last several weeks of 5.6 development and stabilization have been
> > >>>> exciting, with our codebase undergoing a quick security audit [3], and
> > >>>> some real headway in terms of getting into distributions.
> > >>>>
> > >>>> We'll also continue to maintain our wireguard-linux-compat [2]
> > >>>> backports repo for older kernels. On the backports front, WireGuard
> > >>>> was backported to Ubuntu 20.04 (via wireguard-linux-compat) [4] and
> > >>>> Debian Buster (via a real backport to 5.5.y) [5].
> > >>>> I'm also maintaining
> > >>>> real backports, not via the compat layer, to 5.4.y [6] and 5.5.y [7],
> > >>>> and we'll see where those wind up; 5.4.y is an LTS release.
> > >>>>
> > >>>> Meanwhile, the usual up-to-date distributions like Arch, Gentoo, and
> > >>>> Fedora 32 will be getting WireGuard automatically by virtue of having
> > >>>> 5.6, and I expect these to increase in number over time.
> > >>>>
> > >>>> Enjoy!
> > >>>> Jason
> > >>>>
> > >>>> [1] https://lore.kernel.org/lkml/CAHk-=wi9...@ma.../
> > >>>> [2] https://git.zx2c4.com/wireguard-linux-compat/
> > >>>> [3] https://lore.kernel.org/netdev/202...@zx.../
> > >>>> [4] https://git.launchpad.net/~ubuntu-kernel/ubuntu/+source/linux/+git/focal/tree/debian/dkms-versions?h=master-next
> > >>>> [5] https://salsa.debian.org/kernel-team/linux/-/tree/master/debian%2Fpatches%2Ffeatures%2Fall%2Fwireguard
> > >>>> [6] https://git.zx2c4.com/wireguard-linux/log/?h=backport-5.4.y
> > >>>> [7] https://git.zx2c4.com/wireguard-linux/log/?h=backport-5.5.y
>
> Michael
>
> http://www.mksolutions.info

Michael

http://www.mksolutions.info

_______________________________________________
Astlinux-devel mailing list
Ast...@li...
https://lists.sourceforge.net/lists/listinfo/astlinux-devel
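[Editor's note] Since much of the thread concerns WireGuard landing in Linux 5.6: its configuration surface is small, which is part of why it suits a firmware-image appliance. A minimal server-side sketch using the standard wireguard-tools commands; the interface name, addresses, port, and peer key are placeholders for illustration, and the commands require root plus a WireGuard-capable kernel (5.6+, or wireguard-linux-compat on older kernels):

```shell
# Illustrative WireGuard server setup -- all values are placeholders.
umask 077
wg genkey | tee server.key | wg pubkey > server.pub   # generate keypair

cat > /etc/wireguard/wg0.conf <<EOF
[Interface]
PrivateKey = $(cat server.key)
Address = 10.0.0.1/24
ListenPort = 51820

[Peer]
PublicKey = <peer-public-key>
AllowedIPs = 10.0.0.2/32
EOF

wg-quick up wg0    # bring the tunnel up
wg show wg0        # verify interface, port, and peer state
```

The whole VPN is one small text file plus two commands, versus the multi-file certificate infrastructure OpenVPN typically needs.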