From: Christos C. <ch...@cr...> - 2020-08-31 19:41:13

> I can test pf and ipfw. If need be, they can also be separate backends
> (currently they share sshg-fw.in, which does not need to be the case).

ipfw adds 2326 IPs/second (one by one) to an ipfw table on an Intel i7-6700 CPU.

From: Kevin Z. <kev...@gm...> - 2020-08-31 18:36:13

On 8/31/20 5:12 AM, Christopher Engelhard wrote:
> I did some more testing over the weekend with the nftables & iptables
> backends as well. I didn't extensively test ipset, but it can add 6000
> IPs in 5s at maximum throughput. The guest runs Fedora, where firewalld
> uses nftables as the backend, so the nft-sets backend is the relevant
> comparison.

This comparison is great work and gives us pretty useful information. Thank you for doing this research.

> Test 1, maximum throughput:
> Just dumping the list into the backend via 'cat iplist | <backend>':
>
> - 2.4.1 backends:
>   firewalld: ~1400s to complete, 50% CPU (i.e. 1 core maxed out)
>   nft-sets:    ~50s to complete, 50% CPU
>   iptables:    ~22s to complete, 50% CPU
>   ipset:        ~5s to complete, 50% CPU
> - 1-second-batch (i.e. one command in this case):
>   firewalld:  ~35s to complete
>   nft-sets:   1-2s to complete
>   iptables:   1-2s to complete

Just to be clear, does "1-second-batch" collect inputs from stdin over one second, then issue one large command to the backend?

> Test 2, CPU load at 10 blocks/second:
> Dumping the list via 'while read line; do echo $line; sleep 0.1; done < iplist' | <backend>:
>
> - 2.4.1 backends:
>   firewalld: ~1400s to complete, 50% CPU (i.e. 1 core)
>   nft-sets:   ~640s to complete, ~22% CPU
>   iptables:   ~640s to complete, ~12% CPU
> - 1-second-batch:
>   firewalld:  ~640s to complete, ~27% CPU
>   nft-sets:   ~640s to complete, ~4% CPU
>   iptables:   ~640s to complete, ~4% CPU

If your list is 6000 addresses, attacks come in at 10 attacks/second, and the backends are not CPU-limited (which they might be), shouldn't this total 600 seconds? It seems that even the fastest backend, when run in this mode, takes 640 seconds to complete.

> My suggestion would be to
> a) modify the backends where the underlying command supports it to
>    collect block/release commands before sending them to the firewall
>    (although the non-firewalld backends can keep up with very high
>    blockrates, the grouping still reduces CPU load quite a bit).

This would be great.

> b) suggest that people switch from firewalld to nft-sets/iptables
>    backends if performance is still an issue (they can even keep using
>    firewalld for non-sshguard stuff)
> c) talk to firewalld upstream about maybe making firewalld more
>    efficient ...

For now, I think it would also be useful to clearly document that firewalld is slow. I wonder what upstream will say when they hear what you're using firewalld for, heh.

> I can create a PR for (a), though someone else will have to test the pf
> and ipfw backends; I don't have any system that uses them. The rest can
> remain unmodified, as the respective commands can't add multiple IPs in
> one call.

I can test pf and ipfw. If need be, they can also be separate backends (currently they share sshg-fw.in, which does not need to be the case).

Regards,
Kevin

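The 1-second batching discussed in this thread can be sketched in a few lines of shell. This is a hypothetical illustration, not the actual patch: `collect_batches` and `flush_batch` are made-up names, and `flush_batch` merely prints its arguments where a real backend would issue a single ipset/nft/firewall-cmd invocation.

```bash
#!/usr/bin/env bash
# Hypothetical sketch of 1-second batching: collect stdin lines for up to
# one second, then hand them to the backend as a single call.
# flush_batch stands in for one ipset/nft/firewall-cmd invocation.
flush_batch() { printf 'flush:%s\n' "$*"; }

collect_batches() {
    local batch=() line status
    while :; do
        if IFS= read -r -t 1 line; then
            batch+=("$line")            # got a line within the 1s window
            continue
        fi
        status=$?
        if ((${#batch[@]})); then       # window closed: flush what we have
            flush_batch "${batch[@]}"
            batch=()
        fi
        ((status > 128)) || break       # >128 = timeout (keep going), else EOF
    done
}

printf '1.2.3.4\n5.6.7.8\n' | collect_batches
```

With bash's `read -t`, a timeout (exit status > 128) closes the current one-second window, while EOF ends the loop after a final flush.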
From: Christopher E. <ce...@lc...> - 2020-08-31 12:13:05

On 30.08.20 23:31, Bu...@Bu... wrote:
> Two thoughts... 25% is an interesting number, it implies either you
> are actually getting 2Core of 2Threads each in your VM (saturating a
> single processor)

Yeah, that seems to be the case. Qemu consistently reports ~50% CPU usage whenever I saturate any of the backends, and the host also shows 1 core fully utilised.

> Second thought ... rejigger your test and run it to native ipset add
> commands. That will give you a best-case performance value and
> eliminate the firewall-cmd and dbus overhead. That should be your
> yard-stick.

I did some more testing over the weekend with the nftables & iptables backends as well. I didn't extensively test ipset, but it can add 6000 IPs in 5s at maximum throughput. The guest runs Fedora, where firewalld uses nftables as the backend, so the nft-sets backend is the relevant comparison.

All this was done using a fixed list of 6000 block commands using randomly generated IPs (roughly 50/50 IPv4/IPv6).

Test 1, maximum throughput:
Just dumping the list into the backend via 'cat iplist | <backend>':

- 2.4.1 backends:
  firewalld: ~1400s to complete, 50% CPU (i.e. 1 core maxed out)
  nft-sets:    ~50s to complete, 50% CPU
  iptables:    ~22s to complete, 50% CPU
  ipset:        ~5s to complete, 50% CPU
- 1-second-batch (i.e. one command in this case):
  firewalld:  ~35s to complete
  nft-sets:   1-2s to complete
  iptables:   1-2s to complete

- That's 4.3 / 120 / 273 / 1200 blocks per second for the 2.4.1 firewalld/nft-sets/iptables/ipset backends, and those are indeed roughly the rates at which the backends start maxing out the CPU.
- Adding all IPs in one command completes more or less instantly with iptables/nft-sets (ipset can't do that), but takes a whopping 35 seconds with firewalld. That's ... not very fast.

Test 2, CPU load at 10 blocks/second:
Dumping the list via 'while read line; do echo $line; sleep 0.1; done < iplist' | <backend>:

- 2.4.1 backends:
  firewalld: ~1400s to complete, 50% CPU (i.e. 1 core)
  nft-sets:   ~640s to complete, ~22% CPU
  iptables:   ~640s to complete, ~12% CPU
- 1-second-batch:
  firewalld:  ~640s to complete, ~27% CPU
  nft-sets:   ~640s to complete, ~4% CPU
  iptables:   ~640s to complete, ~4% CPU

The 1-second-batch firewalld backend can deal with ~200 commands per second before maxing out the CPU.

Given that the nftables commands seem to be slower than the iptables/ipset commands, I also checked what happens when you switch firewalld to use iptables as its backend; answer: nothing much.

There are two other interaction modes for firewalld that I haven't tested, direct (allows direct access to the backends' tables/chains) & passthrough (passes commands unparsed to the backend). I don't know if they're much faster, but I'd be against using them if it can be avoided, because that would mean that sshguard would have to start detecting things about firewalld's configuration.

My suggestion would be to
a) modify the backends where the underlying command supports it to collect block/release commands before sending them to the firewall (although the non-firewalld backends can keep up with very high blockrates, the grouping still reduces CPU load quite a bit);
b) suggest that people switch from firewalld to the nft-sets/iptables backends if performance is still an issue (they can even keep using firewalld for non-sshguard stuff);
c) talk to firewalld upstream about maybe making firewalld more efficient ...

I can create a PR for (a), though someone else will have to test the pf and ipfw backends; I don't have any system that uses them. The rest can remain unmodified, as the respective commands can't add multiple IPs in one call.

Christopher

From: <Bu...@Bu...> - 2020-08-30 21:44:28

Two thoughts... 25% is an interesting number: it implies either you are actually getting 2 cores of 2 threads each in your VM (saturating a single processor), or something is seriously restricting your ability to beat a core to death. VMware statistics would be interesting too.

Second thought ... rejigger your test and run it with native ipset add commands. That will give you a best-case performance value and eliminate the firewall-cmd and dbus overhead. That should be your yard-stick.

-----
Burton

Date: Fri, 28 Aug 2020 17:18:38 +0200
From: Christopher Engelhard <ce...@lc...>
To: ssh...@li...
Subject: Re: [SSHGuard-users] performance when using firewalld: adding/removing many entries at once
Message-ID: <86a...@lc...>
Content-Type: text/plain; charset=utf-8

On 28.08.20 13:26, Christopher Engelhard wrote:
> & I haven't done any benchmarking yet.

OK, did some playing around. All testing was done with 6000 random IP block requests at 100/s (i.e. over 1 min) on a 2-core/8 GB RAM virtual machine.

Using the firewalld backend of 2.4.1, it takes 24 min to add all IPs to the blocklist; CPU load during that time is fairly consistently 25% for firewalld & 5-10% for firewall-cmd.

Using the "collecting" version & collecting requests for 1s doesn't significantly change the overall load, but the process now completes in just over 3 min. Collecting for 5s or 10s reduces this a bit further to ~2:30 min, again with no significant load reduction in firewalld. Firewall-cmd only causes significant CPU load whenever it is triggered by the backend.

Given that there's no difference between 5s and 10s grouping in overall runtime, I'd say that pretty much reflects the speed at which firewalld is able to add IPs to the ipset. I think the bottleneck is firewalld processing the commands it receives on DBus, not firewall-cmd sending them off; otherwise I'd expect to see much less load on firewalld in the first test compared to the later ones.

Christopher

P.S.: The total number of IPs that ended up in the ipsets dropped to ~5200 in the last two tests, so it is possible that I'm running into some issue with maximum string/command lengths or so. Probably not a good idea to set the interval this high outside of testing.

From: Christopher E. <ce...@lc...> - 2020-08-28 15:19:01

On 28.08.20 13:26, Christopher Engelhard wrote:
> & I haven't done any benchmarking yet.

OK, did some playing around. All testing was done with 6000 random IP block requests at 100/s (i.e. over 1 min) on a 2-core/8 GB RAM virtual machine.

Using the firewalld backend of 2.4.1, it takes 24 min to add all IPs to the blocklist; CPU load during that time is fairly consistently 25% for firewalld & 5-10% for firewall-cmd.

Using the "collecting" version & collecting requests for 1s doesn't significantly change the overall load, but the process now completes in just over 3 min. Collecting for 5s or 10s reduces this a bit further to ~2:30 min, again with no significant load reduction in firewalld. Firewall-cmd only causes significant CPU load whenever it is triggered by the backend.

Given that there's no difference between 5s and 10s grouping in overall runtime, I'd say that pretty much reflects the speed at which firewalld is able to add IPs to the ipset. I think the bottleneck is firewalld processing the commands it receives on DBus, not firewall-cmd sending them off; otherwise I'd expect to see much less load on firewalld in the first test compared to the later ones.

Christopher

P.S.: The total number of IPs that ended up in the ipsets dropped to ~5200 in the last two tests, so it is possible that I'm running into some issue with maximum string/command lengths or so. Probably not a good idea to set the interval this high outside of testing.

From: Christopher E. <ce...@lc...> - 2020-08-28 11:26:58

I think it's a good idea to try to see if we can make the calls to firewalld faster, but in the meantime I've taken a stab at letting the backend function group requests here [1] (branch: batch-process). If you're on Fedora, you can install that version from Copr [2] as well.

The backend now collects all requests that arrive within 1 second and sends them as one to the fw_block()/fw_release() functions. Those can then do something smart with that. So far, only the firewalld backend tries to be smart; all the others just disaggregate the combined request and then do what they did before. It seems to work, but I have only tested the firewalld and null backends & I haven't done any benchmarking yet.

On 27.08.20 21:15, Felix Schwarz wrote:
> I guess I should try to create a test scenario where I add random IP
> addresses via firewall-cmd and check if I also see high CPU load. Ideally I'd
> see a much lower CPU load - though I'm a bit swamped currently so it'll take a
> few days.

You can use the fakeip.sh [3] script from my fork (or from the doc dir of the forked sshguard package) to send random block/release requests to the sshguard backends. There shouldn't be significant overhead in interacting with firewall-cmd in that manner. Just pipe the output into /usr/libexec/sshguard/sshg-fw-firewalld.

Christopher

[1] https://bitbucket.org/lcts/sshguard/branches/compare/lcts/sshguard:batch-process%0Dsshguard/sshguard:master#diff
[2] https://copr.fedorainfracloud.org/coprs/lcts/fedora-rpm-forks/build/1637820/
[3] https://bitbucket.org/lcts/sshguard/src/batch-process/fakeip.sh

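A minimal stand-in for that kind of generator might look like the following. This is a hypothetical sketch: the real fakeip.sh may differ, and the `block <addr> <addrtype> <cidr>` line format is an assumption about what the sshg-fw backends read on stdin, not a documented interface.

```bash
#!/usr/bin/env bash
# Hypothetical fakeip.sh-style generator: emit N random IPv4 lines in an
# assumed "block <addr> <addrtype> <cidr>" format (addrtype 4 = IPv4).
gen_blocks() {
    local n=$1 i
    for ((i = 0; i < n; i++)); do
        printf 'block %d.%d.%d.%d 4 32\n' \
            $((RANDOM % 256)) $((RANDOM % 256)) \
            $((RANDOM % 256)) $((RANDOM % 256))
    done
}

# e.g. to stress a backend: gen_blocks 6000 | <backend>
gen_blocks 3
```

Rate-limiting the stream (as in the tests above) is then just a matter of inserting a `sleep` between lines.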
From: Felix S. <fel...@os...> - 2020-08-27 19:15:57

On 27.08.20 at 18:52, Kevin Zheng wrote:
>> Right now SSHguard log output about blocked IP addresses is delayed
>> by ~4-7 minutes.
>
> What do you mean by this? Is it that sshg-blocker warns about the
> attacker 4-7 minutes after the attack has begun, or is it simply that
> attacks are not blocked until 4-7 minutes later?

I opened "journalctl -f -u sshguard" and "watch systemctl status sshguard" in a terminal. The times shown in these log outputs lagged ~4-7 minutes behind, while they are near real time otherwise. (And there was a continuous stream of new login attempts, so there should have been many messages about new blocks.) The firewalld ipset contained ~600 IP addresses at some point, but after ~16 hours the flooding eventually stopped, so I cannot check all the details anymore.

However:

> If your `top` or `ps` shows wait states, could you check if sshg-blocker
> is running, idle, or being blocked by a pipe write?

"systemctl status sshguard" also shows child processes, and what I could see is that there was always a call to "firewall-cmd" visible. Also, my manual calls to "firewall-cmd" in a separate terminal took pretty long (a few seconds per invocation). Usually these commands are pretty quick.

I guess I should try to create a test scenario where I add random IP addresses via firewall-cmd and check if I also see high CPU load. Ideally I'd see a much lower CPU load - though I'm a bit swamped currently, so it'll take a few days.

Felix

From: Christopher E. <ce...@lc...> - 2020-08-27 19:06:10

On 27.08.20 19:23, Kevin Zheng wrote:
> I wonder how firewalld talks to its underlying backends? Could that be
> the culprit? SSHGuard does install iptables and nft backends, too.

nftables has Python bindings for libnftables, and firewalld uses those. iptables & ipset are accessed via their command-line utilities, I think.

From: Kevin Z. <kev...@gm...> - 2020-08-27 17:23:13

On 8/27/20 10:14 AM, Christopher Engelhard wrote:
> firewall-cmd itself talks to firewalld via the DBus interface, so one
> could maybe save some time by using DBus directly, if firewall-cmd is in
> fact the culprit.
>
> The direct interface bypasses firewalld and sends stuff directly to the
> underlying firewall backend, which might be a problem because that
> backend could be iptables or nftables or whatever else firewalld supports.

I see, firewalld is already one layer of indirection. I wonder how firewalld talks to its underlying backends? Could that be the culprit? SSHGuard does install iptables and nft backends, too.

From: Christopher E. <ce...@lc...> - 2020-08-27 17:15:00

On 27.08.20 18:52, Kevin Zheng wrote:
> I suspect, without any measurement to back my suspicion, that the
> slowness comes from trying to invoke a separate firewall-cmd process so
> many times. Are there other ways to talk to firewalld without spinning
> up a process? I don't use firewalld, but some searching shows that
> there's a D-Bus interface and a "direct" interface. How does the
> firewalld GUI talk to firewalld? Through firewall-cmd or one of the
> interfaces I mentioned?

firewall-cmd itself talks to firewalld via the DBus interface, so one could maybe save some time by using DBus directly, if firewall-cmd is in fact the culprit.

The direct interface bypasses firewalld and sends stuff directly to the underlying firewall backend, which might be a problem because that backend could be iptables or nftables or whatever else firewalld supports.

From: Kevin Z. <kev...@gm...> - 2020-08-27 16:53:03

It's been a while since we've heard of an attack like this on the SSHGuard mailing list.

> Right now SSHguard log output about blocked IP addresses is delayed
> by ~4-7 minutes.

What do you mean by this? Is it that sshg-blocker warns about the attacker 4-7 minutes after the attack has begun, or is it simply that attacks are not blocked until 4-7 minutes later?

If your `top` or `ps` shows wait states, could you check whether sshg-blocker is running, idle, or being blocked by a pipe write? If it is blocked by sshg-fw, then this suggests what you expect is true: that sshg-fw is inefficient.

>> I think SSHguard uses firewalld's API inefficiently as it seems to add/remove
>> only a single IP per CLI call. I suspect this leads to high CPU usage by
>> firewalld when SSHguard needs to block many addresses.
>>
>> firewalld also offers options to add/remove many items at once. Do you think
>> SSHguard could use these options?
>
> One could maybe modify the loop in sshg-fw.in [1] to collect the
> addresses etc. in a bash array:
>
> args=( addr1 addrtype1 cidr1 addr2 addrtype2 cidr2 ...)
>
> and pass that to the fw_block()-etc functions if the previous line was
> read more than a given interval ago, like a second or so:

Like discussed here, one approach could be to collect multiple IPs and run firewall-cmd once per second. I also want to mention that sshg-fw is just an ordinary program; you can write a drop-in substitute, not necessarily in Bourne shell.

I suspect, without any measurement to back my suspicion, that the slowness comes from invoking a separate firewall-cmd process so many times. Are there other ways to talk to firewalld without spinning up a process? I don't use firewalld, but some searching shows that there's a D-Bus interface and a "direct" interface. How does the firewalld GUI talk to firewalld? Through firewall-cmd or one of the interfaces I mentioned?

Unfortunately, I don't have a Fedora installation around and won't be able to test.

But this is something we should keep in mind for all the other firewall backends. (I use pf, and sshg-fw-pf does the expensive thing of spinning up a pfctl process for each address that it blocks. It would be more efficient if I rewrote it using the ioctl(2) interface to talk directly to the /dev/pf device node.)

From: Christopher E. <ce...@lc...> - 2020-08-27 08:55:09

On 27.08.20 10:21, Christopher Engelhard wrote:
> One could maybe modify the loop in sshg-fw.in [1] to collect the
> addresses etc. in a bash array [...] and pass that to the fw_block()-etc functions

... or even simpler, collect them in a string and shift through them in the receiving functions.

Christopher

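That string-based variant could look roughly like this (a hypothetical sketch, not the actual patch; the real fw_block() signature may differ). The collected addresses travel as one flat argument list of addr/addrtype/cidr triples, and the receiving function shifts through them:

```bash
#!/usr/bin/env bash
# Hypothetical sketch: a backend function consuming flattened
# addr/addrtype/cidr triples; printf stands in for the firewall command.
fw_block() {
    while [ "$#" -ge 3 ]; do
        addr=$1 addrtype=$2 cidr=$3
        printf 'block %s/%s (type %s)\n' "$addr" "$cidr" "$addrtype"
        shift 3
    done
}

args="192.0.2.1 4 32 2001:db8::1 6 128"
fw_block $args   # unquoted on purpose: word splitting unpacks the string
```

A backend whose underlying command accepts multiple addresses would instead assemble one command from all the triples rather than acting per iteration.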
From: Christopher E. <ce...@lc...> - 2020-08-27 08:41:01
Hi,
On 27.08.20 09:14, Felix Schwarz wrote:
> Hi,
>
> short version:
> I think SSHguard uses firewalld's API inefficiently as it seems to add/remove
> only a single IP per CLI call. I suspect this leads to high CPU usage by
> firewalld when SSHguard needs to block many addresses.
>
> firewalld also offers options to add/remove many items at once. Do you think
> SSHguard could use these options?
One could maybe modify the loop in sshg-fw.in [1] to collect the
addresses etc. in a bash array:
args=( addr1 addrtype1 cidr1 addr2 addrtype2 cidr2 ...)
and pass that to the fw_block()-etc functions if the previous line was
read more than a given interval ago, like a second or so:
<add vars to argarray>
if [[ $(( $CURRENTTIME - $LASTTIME )) -gt 1 ]]; then
fw_block <argarray>
<clear argarray>
fi
LASTTIME=$CURRENTTIME
The backends would then be modified to loop over the arg array instead
of using the parameters directly:
arg=($@)
for ((i=0; i < $#; i+=3)); do
addr="${arg[i]}"
addrtype="${arg[i+1]}"
cidr="${arg[i+2]}"
<do stuff>
done
If the backend can not do multiple addresses, <do stuff> would be the
same as before, but if it can, it could use the loop to assemble the
actual command to run and then only trigger the backend once.
> Even though the server is probed for more than 10 hours now SSHguard still
> sees new IP addresses so I don't dare to hope that the attacker will be
> running out of new IP addresses soon.
Ouch.
Christopher
[1] https://bitbucket.org/sshguard/sshguard/src/master/src/fw/sshg-fw.in
From: Felix S. <fel...@os...> - 2020-08-27 07:30:37

Hi,

short version:
I think SSHguard uses firewalld's API inefficiently, as it seems to add/remove only a single IP per CLI call. I suspect this leads to high CPU usage by firewalld when SSHguard needs to block many addresses.

firewalld also offers options to add/remove many items at once. Do you think SSHguard could use these options?

Felix

background:
I'm a satisfied user of SSHguard. So far it really works great, and I found it easy to set up with CentOS and Fedora. This morning, however, one of my servers is being targeted by some kind of distributed brute-force "attack". I see roughly 10-20 SSH login attempts per second from various IP addresses (I guess a few hundred, but well below 1000). SSHguard is happily blocking IPs and still working as intended, but the CPU of that poor little server is maxed out at 100% (it is a very tiny instance).

When using "top" I see that firewalld needs a lot of CPU over longer periods of time. I can see that SSHguard uses "firewall-cmd [...] --add-entry=.../32" and seems to add/remove only a single IP at a time. Based on experience with other software, I suspect that inefficient use of the firewalld API might contribute to this high CPU usage.

Right now SSHguard log output about blocked IP addresses is delayed by ~4-7 minutes. Even though the server has been probed for more than 10 hours now, SSHguard still sees new IP addresses, so I don't dare hope that the attacker will run out of new IP addresses soon.

From: Kevin B. <kev...@gm...> - 2020-08-18 03:03:32

On 2020/08/18 01:51, Kevin Zheng wrote:
> I'm inviting anyone interested to submit issues, patches, and pull
> requests for the SSHGuard website. The sources for the current version
> of the website that's online is here:
>
> https://bitbucket.org/sshguard/website-static/

1) If there was a top-level README in the repo, it would be displayed automatically when browsing the top level of the repo, and it could link to the live website. I keep forgetting it is .net and not .org!

2) On the front page, I think SSHGUARD GEARS would be better written as SSHGUARD FEATURES.

Kevin

From: Kevin Z. <kev...@gm...> - 2020-08-17 17:51:17

Hi there,

I'm inviting anyone interested to submit issues, patches, and pull requests for the SSHGuard website. The sources for the current version of the website that's online are here:

https://bitbucket.org/sshguard/website-static/

You might be surprised to see M4 files in this repository. The reason requires some background: SSHGuard's website was once written in Django. One day the server running it died, and since the Django version was outdated, it would have been a lot of work to get it working on a new installation. I scraped the themes and HTML and cobbled together a static version of the site as a stopgap measure. The stopgap measure continues to exist to this day. As a result, the website still looks approximately the same, and changes are now made directly to the HTML that's M4'd to generate the header.

I am looking for a long-term solution, probably my favorite new static site generator, but it'll be some time before I can sit down and port the themes.

For now, the in-progress RSS feed is written by hand (news.xml). I'll probably de-link the existing news.html so I don't have to maintain two copies of this.

Patches and pull requests against the content on the other pages are welcome.

Regards,
Kevin

From: Kevin Z. <kev...@gm...> - 2020-08-17 16:46:03

On 8/16/20 3:14 PM, Jungle Boogie wrote:
> Just curious, do you plan on updating the news.html page?
> As lbutlr pointed out, the latest release listed is 2.2.0 from July 9,
> which was actually from July 2018.

Yes, and to resurrect the RSS feed, eventually.

From: Jungle B. <jun...@gm...> - 2020-08-16 22:14:30

Hi Kevin,

On 8/16/2020 10:35 AM, Kevin Zheng wrote:
> On 8/16/20 7:57 AM, @lbutlr wrote:
>> I saw there was a new version of sshguard so, wondering what the new
>> features were, I went to https://www.sshguard.net/news.html which
>> told me 2.2 was just released in "July", so it looks like that site is
>> not maintained. I then went and looked around the SF pages, but there
>> is no announcement there nor a changes list that I could find, just
>> the tgz of the new version.
>>
>> I didn't see anything on this list, and the sshguard-maintainers
>> archive shows the last post was almost a year ago.
>>
>> I mean, it's not really important, and I assume there is a changelog
>> inside the tgz file, but I am unlikely to see that when I update via
>> a package manager.
>>
>> Short of manually downloading the package and such, where would I
>> find the changelog?
>
> This one is totally my fault: I cut the release but didn't get around to
> making the release announcement.

Just curious, do you plan on updating the news.html page? As lbutlr pointed out, the latest release listed is 2.2.0 from July 9, which was actually from July 2018.

At any rate, thank you for another release and for continuing to maintain sshguard.

> I'll do this now.

From: Kevin Z. <kev...@gm...> - 2020-08-16 17:40:10

Dear SSHGuard users,

SSHGuard 2.4.1 is now available. (The release was cut July 31st; this release announcement is late.)

**Added**
- Recognize RFC 5424 syslog banners
- Recognize busybox syslog -S banners
- Recognize rsyslog banners
- Recognize web services TYPO3, Contao, and Joomla
- Update signatures for Dovecot
- Update signatures for OpenSSH

**Changed**
- Whitelist entire 127.0.0.0/8 and ::1 block
- Whitelist file allows inline comments

**Fixed**
- Fix FILES and LOGREADER configuration file options

Regards,
Kevin

From: Kevin Z. <kev...@gm...> - 2020-08-16 17:35:44

On 8/16/20 7:57 AM, @lbutlr wrote:
> I saw there was a new version of sshguard so, wondering what the new
> features were, I went to https://www.sshguard.net/news.html which
> told me 2.2 was just released in "July" so, looks like that site is
> not maintained. I then went and looked around the SF pages, but there
> is no announcement there nor a changes list that I could find, just
> the tgz of the new version.
>
> I didn't see anything on this list and the sshguard-maintainers
> archive shows the last post was almost a year ago.
>
> I mean, it's not really important, and I assume there is a changelog
> inside the tgz file, but I am unlikely to see that when I update via
> a package manager.
>
> Short of manually downloading the package and such, where would I
> find the changelog?

This one is totally my fault: I cut the release but didn't get around to making the release announcement.

I'll do this now.

From: @lbutlr <kr...@kr...> - 2020-08-16 14:57:35

I saw there was a new version of sshguard, so, wondering what the new features were, I went to https://www.sshguard.net/news.html which told me 2.2 was just released in "July"; so it looks like that site is not maintained. I then went and looked around the SF pages, but there is no announcement there nor a changes list that I could find, just the tgz of the new version.

I didn't see anything on this list, and the sshguard-maintainers archive shows the last post was almost a year ago.

I mean, it's not really important, and I assume there is a changelog inside the tgz file, but I am unlikely to see that when I update via a package manager.

Short of manually downloading the package and such, where would I find the changelog?

--
"Are you pondering what I'm pondering?"
"Yeah, but I thought Madonna already had a steady bloke!"

From: Jos C. <ssh...@cl...> - 2020-08-11 08:06:24

Hello team,

Just saw that my blacklist contains several identical IP numbers:

1594465182|260|4|193.35.51.13
1594482213|260|4|5.188.206.194
1594545561|260|4|46.38.145.5 *
1594545565|260|4|46.38.145.4
1594545807|260|4|46.38.145.5 *
1594545891|260|4|46.38.145.254
1594560837|260|4|185.143.73.33
1594636209|260|4|78.128.113.114
1594636209|260|4|78.128.113.114
1594765288|100|4|211.118.42.219
1594991575|260|4|46.38.145.253
1595278864|260|4|45.95.168.77
1595283662|260|4|45.95.168.77
1595283662|260|4|45.95.168.77
1595408787|260|4|45.148.10.98
1595408787|260|4|45.148.10.98
1595408787|260|4|45.148.10.98
1595801798|260|4|185.253.217.78
1595801798|260|4|185.253.217.78
1595805119|260|4|185.253.217.78
1596154612|260|4|91.191.209.188
1596398311|260|4|193.56.28.20
1596880381|210|4|185.210.217.116

Can you tell me what I should check in order to solve this?

Thanks,
Jos

--
With both feet on the ground you will never make a step forward

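As a stopgap while the root cause is investigated, such duplicates can be stripped offline. A sketch, assuming the `timestamp|service|addrtype|address` layout shown above and keeping the oldest entry per address (if applied to the live file, sshguard should presumably be stopped first):

```bash
#!/usr/bin/env bash
# Keep only the first (oldest) line per address; the address is assumed
# to be the 4th '|'-separated field of each blacklist entry.
dedup_blacklist() { awk -F'|' '!seen[$4]++'; }

printf '%s\n' \
    '1594545561|260|4|46.38.145.5' \
    '1594545565|260|4|46.38.145.4' \
    '1594545807|260|4|46.38.145.5' | dedup_blacklist
```

Running it over the sample above keeps only the first `46.38.145.5` line and drops the later duplicate.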
From: @lbutlr <kr...@kr...> - 2020-07-15 09:22:57
|
On 15 Jul 2020, at 01:58, Kevin Zheng <kev...@gm...> wrote:
> On 7/14/20 2:57 PM, @lbutlr wrote:
>> auth.log contains entries like:
>>
>> sshd[81715] error: PAM: Authentication error for root from 116.98.172.159
>> sshd[81715] Connection closed by authenticating user root 116.98.172.159 port 49832 [preauth]
>
> Are these the exact lines from your auth.log? SSHGuard is not detecting
> these because it does not recognize your log header. Is this syslogd?

Other than the time stamp and server name, yes. I thought that information was not relevant.

> This is not recognized by 2.4.0 because OpenSSH recently changed their
> log format. The parser in Git does recognize this as an attack. I will
> cut a release soon.
>
>> So, I try to trigger it manually by appending the above log lines back to auth.log from another session:
>>
>> # which sshguard
>> /usr/local/sbin/sshguard
>> # env SSHGUARD_DEBUG=foo /usr/local/sbin/sshguard
>> /usr/local/sbin/sshguard: cannot create : No such file or directory
>
> The version in FreeBSD ports has a non-standard addition that checks for
> the existence of the PID file. I think it's just saying it can't write
> to the PID file, because $PID_FILE is empty? I'll investigate further.
>
>> sshguard 94135 - - whitelist: add '***' as plain IPv4.
>> sshguard 94135 - - whitelist: add plain IPv4 ***.
>> sshguard 94135 - - whitelist: add IPv4 block: ***.
>> sshguard 94135 - - whitelist: add IPv4 block: ***.
>> sshguard 94135 - - blacklist: blocking 4832 addresses
>> sshguard 94135 - - whitelist: add '127.0.0.1' as plain IPv4.
>> sshguard 94135 - - whitelist: add plain IPv4 127.0.0.1.
>> sshguard 94135 - - Now monitoring attacks.
>
> SSHGuard does not accept attacks from standard input.

No, I added the log lines to auth.log in another session (this works fine on a different machine; at least, when I logged in there it showed that the IP I used was in the whitelist).

> There are several things you can do to troubleshoot SSHGuard, but an
> easy starter is to syscall trace it:
>
> $ truss -f /usr/local/sbin/sshguard

I'll give that a shot. What are some other steps to troubleshoot?

-- 
'They say that whoever pays the piper calls the tune.'
'But, gentlemen,' said Mr Saveloy, 'whoever holds a knife to the piper's throat writes the symphony.'
    --Interesting Times
From: Kevin Z. <kev...@gm...> - 2020-07-15 07:58:59
Hi there,

So, a few things:

On 7/14/20 2:57 PM, @lbutlr wrote:
> auth.log contains entries like:
>
> sshd[81715] error: PAM: Authentication error for root from 116.98.172.159
> sshd[81715] Connection closed by authenticating user root 116.98.172.159 port 49832 [preauth]

Are these the exact lines from your auth.log? SSHGuard is not detecting these because it does not recognize your log header. Is this syslogd?

You can test the parser by itself (on FreeBSD) using:

$ /usr/local/libexec/sshg-parser

It reads input line by line, does nothing if it does not detect an attack, and prints something if it does. You have an example attack below:

Jul 14 14:07:05 mail.covisp.net sshd[81715] error: PAM: Authentication error for root from 116.98.172.159

This is not recognized by 2.4.0 because OpenSSH recently changed their log format. The parser in Git does recognize this as an attack. I will cut a release soon.

> So, I try to trigger it manually by appending the above log lines back to auth.log from another session:
>
> # which sshguard
> /usr/local/sbin/sshguard
> # env SSHGUARD_DEBUG=foo /usr/local/sbin/sshguard
> /usr/local/sbin/sshguard: cannot create : No such file or directory

The version in FreeBSD ports has a non-standard addition that checks for the existence of the PID file. I think it's just saying it can't write to the PID file, because $PID_FILE is empty? I'll investigate further.

> sshguard 94135 - - whitelist: add '***' as plain IPv4.
> sshguard 94135 - - whitelist: add plain IPv4 ***.
> sshguard 94135 - - whitelist: add IPv4 block: ***.
> sshguard 94135 - - whitelist: add IPv4 block: ***.
> sshguard 94135 - - blacklist: blocking 4832 addresses
> sshguard 94135 - - whitelist: add '127.0.0.1' as plain IPv4.
> sshguard 94135 - - whitelist: add plain IPv4 127.0.0.1.
> sshguard 94135 - - Now monitoring attacks.

SSHGuard does not accept attacks from standard input.

So, it seems like some things are wrong here; some of them should be fixed when we cut a release.

There are several things you can do to troubleshoot SSHGuard, but an easy starter is to syscall trace it:

$ truss -f /usr/local/sbin/sshguard

If you could trace it for 10 seconds or so and attach the output, I might be able to see what's going on.

Good luck,
Kevin

-- 
Kevin Zheng
kev...@gm... | ke...@be...
XMPP: ke...@ee...
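[Editor's note] As a quick local sanity check on the log-header question above, one can test whether a line from auth.log starts with the usual syslog prefix (month, day, time, hostname). The regex below is a rough approximation of a syslog header, not SSHGuard's actual grammar:

```shell
# Rough check: does the log line begin with a syslog-style header,
# i.e. "Mon DD HH:MM:SS hostname ..."? Pattern is an approximation.
line='Jul 14 14:07:05 mail.covisp.net sshd[81715] error: PAM: Authentication error for root from 116.98.172.159'
if printf '%s\n' "$line" | grep -Eq '^[A-Z][a-z]{2} +[0-9]{1,2} [0-9]{2}:[0-9]{2}:[0-9]{2} [^ ]+ '; then
    echo "syslog-style header present"
else
    echo "no recognizable header"
fi
# -> syslog-style header present
```

If lines in your auth.log fail this kind of check, the problem is upstream of SSHGuard's attack signatures, in how syslogd writes the file.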
From: @lbutlr <kr...@kr...> - 2020-07-14 22:16:13
sshguard-2.4.0_2,1 on FreeBSD 12.1

If I check my sshguard table, it returns no entries:

# pfctl -t sshguard -T show
#

auth.log contains entries like:

sshd[81715] error: PAM: Authentication error for root from 116.98.172.159
sshd[81715] Connection closed by authenticating user root 116.98.172.159 port 49832 [preauth]

I can manually add an IP to the sshguard table, but I cannot see any evidence that sshguard is doing anything, and "sshg" appears in no log files in /var/log/.

# pfctl -t badguys -T show | grep 116.98.172.159
# pfctl -t sshguard -T add 116.98.172.159
1/1 addresses added.
# pfctl -t sshguard -T show
   116.98.172.159

So, PF appears to be fine.

So, I try to trigger it manually by appending the above log lines back to auth.log from another session:

# which sshguard
/usr/local/sbin/sshguard
# env SSHGUARD_DEBUG=foo /usr/local/sbin/sshguard
/usr/local/sbin/sshguard: cannot create : No such file or directory
sshguard 94135 - - whitelist: add '***' as plain IPv4.
sshguard 94135 - - whitelist: add plain IPv4 ***.
sshguard 94135 - - whitelist: add IPv4 block: ***.
sshguard 94135 - - whitelist: add IPv4 block: ***.
sshguard 94135 - - blacklist: blocking 4832 addresses
sshguard 94135 - - whitelist: add '127.0.0.1' as plain IPv4.
sshguard 94135 - - whitelist: add plain IPv4 127.0.0.1.
sshguard 94135 - - Now monitoring attacks.

Jul 14 14:07:05 mail.covisp.net sshd[81715] error: PAM: Authentication error for root from 116.98.172.159
Jul 14 14:07:08 mail.covisp.net sshd[81715] Connection closed by authenticating user root 116.98.172.159 port 49832 [preauth]

Nothing. ??
ps output:

root 843 0.0 0.1 4884 1928 - Is 7Jun20 0:00.00 /bin/sh /usr/local/sbin/sshguard -b /usr/local/etc/sshguard.blacklist -w /usr/local/etc/sshguard.whitelist -b 120:/var/db/sshguard/blacklist.db -i /var/run/sshguard.pid
root 848 0.0 0.1 5560 2692 - IC 7Jun20 0:00.15 /usr/local/libexec/sshg-blocker -a 30 -b 120:/var/db/sshguard/blacklist.db -p 1200 -s 18000 -w /usr/local/etc/sshguard.whitelist

/usr/local/etc/sshguard.conf:

BACKEND="/usr/local/libexec/sshg-fw-pf"
FILES="/var/log/auth.log /var/log/mail.log /var/log/debug.log /var/log/xferlog"
THRESHOLD=30
BLOCK_TIME=1200
DETECTION_TIME=18000
BLACKLIST_FILE=30:/var/db/sshguard/blacklist.db
WHITELIST_FILE=/usr/local/etc/sshguard.whitelist
#EOF

/etc/pf.conf:

ext=em0
table <goodguys> { **someIPs** } persist
table <badguys> { } persist
table <sshguard> persist
block in quick on $ext from <sshguard> label "sshguardblock"
block in quick on $ext from <badguys> label "COUNTRY BLOCKS"
pass in quick on $ext proto tcp from <goodguys> to ($ext) port ssh keep state
pass in on $ext proto tcp from any to ($ext) port ssh keep state (max-src-conn 5, max-src-conn-rate 4/300, overload <badguys> flush global)
#EOF
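[Editor's note] Regarding the manual test of appending old log lines to auth.log: if timestamps are checked against DETECTION_TIME (18000 s here) — I am not certain they are — a stale line could be silently ignored, so re-testing with a freshly timestamped line is worth a try. A sketch; the hostname and PID are simply copied from the examples in this thread, and the redirect into auth.log is left commented out deliberately:

```shell
# Build a synthetic attack line with the current time, in the same
# shape as the real auth.log entries quoted in this thread.
ts=$(date '+%b %e %H:%M:%S')
line="$ts mail.covisp.net sshd[81715] error: PAM: Authentication error for root from 116.98.172.159"
printf '%s\n' "$line"
# To feed it to a running sshguard watching auth.log (as root):
# printf '%s\n' "$line" >> /var/log/auth.log
```

Repeating the line a few times (THRESHOLD=30, and each PAM failure scores well below that) may be needed before the address reaches the sshguard pf table.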