Hello,
This is related to v1.7.1.
I have a box with a single NIC.
I would like Netflow packets generated for the traffic hitting the NIC (I am mirroring a port to it).
I would like those Netflow packets sent to that same interface, where ntop is listening as the Netflow collector.
I have ntop bound to the box's local IP, so I would like ipt_NETFLOW to send its packets down the network stack to that local IP (rather than to the loopback address).
I am seeing some odd behavior when I use /etc/modprobe.d/ipt_netflow.conf to configure the module.
The destination IP does not seem to be set properly.
[root@server ~]# iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
NETFLOW all -- anywhere anywhere NETFLOW
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
NETFLOW all -- anywhere anywhere NETFLOW
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
NETFLOW all -- anywhere anywhere NETFLOW
ACCEPT tcp -- anywhere anywhere state NEW,ESTABLISHED tcp spt:https
[root@server ~]# cat /proc/net/stat/ipt_netflow
Flows: active 3 (peak 18 reached 0d0h6m ago), mem 0K
Hash: size 8192 (mem 32K), metric 1.0, 1.0, 1.0, 1.0. MemTraf: 871 pkt, 28 K (pdu 2, 458).
Timeout: active 1800, inactive 15. Maxflows 2000000
Rate: 552 bits/sec, 1 packets/sec; Avg 1 min: 378 bps, 0 pps; 5 min: 537 bps, 0 pps
cpu# stat: <search found new, trunc frag alloc maxflows>, sock: <ok fail cberr, bytes>, traffic: <pkt, bytes>, drop: <pkt, bytes>
Total stat: 0 1049 105, 0 0 0 0, sock: 39 0 0, 5 K, traffic: 1154, 0 MB, drop: 0, 0 K
cpu0 stat: 0 285 60, 0 0 0 0, sock: 39 0 0, 5 K, traffic: 345, 0 MB, drop: 0, 0 K
cpu1 stat: 0 235 11, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 246, 0 MB, drop: 0, 0 K
cpu2 stat: 0 286 28, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 314, 0 MB, drop: 0, 0 K
cpu3 stat: 0 243 6, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 249, 0 MB, drop: 0, 0 K
sock0: 127.0.0.1:2055, sndbuf 112640, filled 1, peak 1; err: sndbuf reached 0, other 0
[root@server ~]# sysctl -a | grep net.netflow
net.netflow.active_timeout = 1800
net.netflow.inactive_timeout = 15
net.netflow.debug = 0
net.netflow.hashsize = 8192
net.netflow.sndbuf = 112640
net.netflow.destination = 127.0.0.1:2055
net.netflow.aggregation =
net.netflow.maxflows = 2000000
net.netflow.flush = 0
[root@server ~]# cat /etc/modprobe.d/ipt_netflow.conf
options destination=192.168.20.11:2055
If I then use sysctl to set net.netflow.destination to the local address and run tcpdump on port 2055, I see nothing:
[root@server ~]# sysctl -w net.netflow.destination="192.168.20.11:2055"
net.netflow.destination = 192.168.20.11:2055
[root@server ~]# tcpdump -c 100 port 2055
tcpdump: verbose output suppressed, use -v or -vv for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), capture size 65535 bytes
[nothing]
How do I pre-configure ipt_NETFLOW if /etc/modprobe.d/ipt_netflow.conf doesn't work?
Where is the flaw in my configuration, given that no Netflow packets are being generated or sent to the target address?
Thanks,
Matt
1. Your modprobe format is wrong.
$ man modprobe.conf
...
options modulename option...
...
So it should be:
options ipt_NETFLOW destination=192.168.20.11:2055
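As a quick sanity check (just a sketch, assuming the module is installed under the name ipt_NETFLOW), you can confirm modprobe picks the option up, and that it took effect once the module is reloaded:
modprobe -c | grep -i netflow
sysctl net.netflow.destination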
2. Are you sure your eth0 has the IP 192.168.20.11?
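For example, you can check the addresses assigned to the interface with:
ip addr show eth0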
Thanks for replying.
Okay, I have changed the modprobe syntax. Thanks!
I am sure that eth0 has this IP.
The module uses a socket to connect to the destination address, so the kernel should route the packets properly. You will need to investigate further...
Thanks.
After correcting the modprobe.conf syntax to:
[root@server ~]# cat /etc/modprobe.d/ipt_netflow.conf
options ipt_NETFLOW destination=192.168.20.11:2055
I see the following info (note the lack of a socket):
[root@server ~]# cat /proc/net/stat/ipt_netflow
Flows: active 12 (peak 15 reached 0d0h0m ago), mem 0K
Hash: size 8192 (mem 32K), metric 1.0, 1.0, 1.0, 1.0. MemTraf: 76 pkt, 7 K (pdu 14, 1223).
Timeout: active 1800, inactive 15. Maxflows 2000000
Rate: 4172 bits/sec, 6 packets/sec; Avg 1 min: 1439 bps, 0 pps; 5 min: 344 bps, 0 pps
cpu# stat: <search found new, trunc frag alloc maxflows>, sock: <ok fail cberr, bytes>, traffic: <pkt, bytes>, drop: <pkt, bytes>
Total stat: 0 61 15, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 76, 0 MB, drop: 0, 0 K
cpu0 stat: 0 26 6, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 32, 0 MB, drop: 0, 0 K
cpu1 stat: 0 10 0, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 10, 0 MB, drop: 0, 0 K
cpu2 stat: 0 18 8, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 26, 0 MB, drop: 0, 0 K
cpu3 stat: 0 7 1, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 8, 0 MB, drop: 0, 0 K
[root@server ~]# sysctl -a | grep net.netflow
error: "No such file or directory" reading key "net.netflow.sndbuf"
net.netflow.active_timeout = 1800
net.netflow.inactive_timeout = 15
net.netflow.debug = 0
net.netflow.hashsize = 8192
net.netflow.destination = 192.168.2.115:2055
net.netflow.aggregation =
net.netflow.maxflows = 2000000
net.netflow.flush = 0
I am working to configure things as I have documented here: http://mbrownnyc.wordpress.com/2011/12/06/implement-netflow-on-centos/
At this point everything appears to be configured correctly, as far as I can tell. Can you provide any further information on what to investigate?
Thanks,
Matt
Check dmesg.
Your continued input is appreciated.
dmesg revealed that the ipt_NETFLOW module is trying to load before eth0 is available, so it fails to bind to the not-yet-ready interface.
I then perform the following:
service iptables stop
rmmod ipt_NETFLOW
modprobe ipt_NETFLOW
service iptables start
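To verify the destination takes effect after the reload, I check (using the interfaces already shown above):
sysctl net.netflow.destination
grep sock0 /proc/net/stat/ipt_netflow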
Setting aside these odd module load-order problems, it still appears that there is an issue with ipt_NETFLOW.
cat /proc/net/stat/ipt_netflow now lists a sock0 line with the destination IP.
tcpdump -c 100 port 2055 still shows no packets on port 2055.
My iptables chains look like this:
[root@server ~]# iptables --list
Chain INPUT (policy ACCEPT)
target prot opt source destination
NETFLOW all -- anywhere anywhere NETFLOW
ACCEPT all -- anywhere anywhere state RELATED,ESTABLISHED
ACCEPT icmp -- anywhere anywhere
ACCEPT all -- anywhere anywhere
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:ssh
ACCEPT tcp -- anywhere anywhere state NEW tcp dpt:https
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain FORWARD (policy ACCEPT)
target prot opt source destination
NETFLOW all -- anywhere anywhere NETFLOW
REJECT all -- anywhere anywhere reject-with icmp-host-prohibited
Chain OUTPUT (policy ACCEPT)
target prot opt source destination
NETFLOW all -- anywhere anywhere NETFLOW
ACCEPT tcp -- anywhere anywhere state NEW,ESTABLISHED tcp spt:https
I am running ntop listening on port 2055. Is there a reason why ipt_NETFLOW can't send to the same port?
[root@server ~]# lsof -i :2055
COMMAND PID USER FD TYPE DEVICE SIZE/OFF NODE NAME
ntop 1249 ntop 8u IPv4 11517 0t0 UDP *:iop
[root@server ~]# netstat -apn | grep :2055
udp 0 0 0.0.0.0:2055 0.0.0.0:* 1249/ntop
udp 0 0 192.168.20.11:33021 192.168.20.11:2055 ESTABLISHED -
Is there anything else specifically that I should be checking?
Thanks again for your time,
Matt
You can also load the module with the debug=1 option.
You can also view/provide the "sock: <ok fail cberr, bytes>" columns from /proc/net/stat/ipt_netflow and check whether the socket error counters are increasing. Check the sock0 line too; that is the per-socket statistic.
By the way, why send packets to eth0 and not to localhost? Maybe ntop can listen on 127.0.0.1; it should work perfectly on 127.0.0.1.
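For example, a quick runtime test (just a sketch, assuming your collector will accept flows on the loopback address):
sysctl -w net.netflow.destination="127.0.0.1:2055"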
Thanks again for the reply.
Cool. With debug set to 1, are the messages logged to dmesg? Even the folks at the NST project don't seem to know (http://nst.sourceforge.net/nst/docs/scripts/ipt_netflow.html). I'm hoping this reveals much-needed info.
It seems that some packets are counted as OK (not many though, 90 right now), but I do not see these packets when I sniff the port traffic with tcpdump. I tested with netcat [yes wat | nc -u 192.168.20.11 2055], and it appears that tcpdump _doesn't_ capture packets sourced from the same box, so this test is flawed. I have verified that packets are captured when they come from a remote host.
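I suspect (this is my assumption, not something verified above) that packets addressed to a local IP traverse the loopback interface rather than eth0, which would explain why tcpdump on its default interface sees nothing; capturing on lo should confirm:
tcpdump -i lo -n -c 100 udp port 2055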
Judging by the /stat: found/ column and the /traffic: pkt/ count, it seems that ipt_NETFLOW is seeing the packets, but simply isn't generating Netflow data from them:
[root@server ~]# cat /proc/net/stat/ipt_netflow
Flows: active 5 (peak 218 reached 0d0h25m ago), mem 0K
Hash: size 8192 (mem 32K), metric 1.0, 1.0, 1.0, 1.0. MemTraf: 862 pkt, 44 K (pdu 0, 0).
Timeout: active 1800, inactive 15. Maxflows 2000000
Rate: 18248 bits/sec, 11 packets/sec; Avg 1 min: 159636 bps, 9 pps; 5 min: 6937963 bps, 408 pps
cpu# stat: <search found new, trunc frag alloc maxflows>, sock: <ok fail cberr, bytes>, traffic: <pkt, bytes>, drop: <pkt, bytes>
Total stat: 2 251816 1129, 0 0 0 0, sock: 103 0 0, 55 K, traffic: 252945, 475 MB, drop: 0, 0 K
cpu0 stat: 0 5961 214, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 6175, 4 MB, drop: 0, 0 K
cpu1 stat: 2 1908 412, 0 0 0 0, sock: 103 0 0, 55 K, traffic: 2320, 0 MB, drop: 0, 0 K
cpu2 stat: 0 240943 284, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 241227, 468 MB, drop: 0, 0 K
cpu3 stat: 0 3004 219, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 3223, 1 MB, drop: 0, 0 K
sock0: 192.168.20.11:2055, sndbuf 112640, filled 1, peak 1; err: sndbuf reached 0, other 0
As for binding ntop's Netflow collector to a specific IP, it doesn't seem possible based on the manpage (http://www.ntop.org/wp-content/uploads/2011/09/ntop-man.html). I was able to bind the HTTP/S server to the local IP, though.
By default, and apparently this is the only option (TBD), ntop's Netflow collector binds to 0.0.0.0:2055 on a configurable (or the first) interface:
[root@server ~]# netstat -apn | grep ntop
tcp 0 0 192.168.20.11:443 0.0.0.0:* LISTEN 1523/ntop
udp 0 0 0.0.0.0:2055 0.0.0.0:* 1523/ntop
I have sent a message over to ntop@listgateway.unipi.it re: the ntop netflow collector binding.
Thanks again,
Matt
Well, there are actually several debug levels: 1, 2, and 3; a higher number is more verbose. You may try increasing the level on your test box. Yes, the messages are logged to dmesg.
I didn't know about the NST script, thanks.
Total stat: 2 251816 1129, 0 0 0 0, sock: 103 0 0, 55 K, traffic: 252945, 475 MB, drop: 0, 0 K
It says 103 packets / 55 K of traffic were sent to the destination. By design the module exports everything it sees, but it may collect traffic into flows for some time before exporting (as the Netflow specification allows). You can control these limits with the timeout parameters.
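For example, to see exports sooner while testing, you could temporarily shorten the timeouts (the values here are arbitrary):
sysctl -w net.netflow.active_timeout=60
sysctl -w net.netflow.inactive_timeout=10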
0.0.0.0 is not the localhost interface but the 'any' address (i.e. all interfaces). Localhost is 127.0.0.1.
Thanks aabc.
I will check out the debug levels to see what they produce.
I was able to confirm the Netflow packets were being sent by adding the following iptables rule:
-A OUTPUT -p udp --dport 2055 -j LOG
Then:
tail -f /var/log/messages
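(For reference, the equivalent one-shot command form, with an optional log prefix added here to make the entries easier to grep, would be something like this:)
iptables -A OUTPUT -p udp --dport 2055 -j LOG --log-prefix "NETFLOW-EXPORT "
grep "NETFLOW-EXPORT" /var/log/messages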
Am I wrong to expect that Netflow will report statistics for every single packet that reaches the probe? This is why I opted for a Netflow probe over an sFlow probe (which is what my switches and firewalls provide) in the first place: so that I could see every single conversation that took place. The source of my info is: http://www.plixer.com/blog/general/netflow-vs-sflow-it-may-matter-to-you/
If this is not the case, I suppose I will have to rely on ntop's aggregation techniques on top of a full packet capture, and I'd prefer not to do that, as it would be much less extensible (I couldn't use anything other than ntop later).
ntop is not seeing WAN traffic right now, and I'm not exactly sure why, since the packets arriving on the NIC have WAN source and destination addresses.
Thanks for all your input and help,
Matt
Netflow will account statistics for every single packet it sees, of course. So it is perfectly suitable for billing or security monitoring, and not just network performance estimation.
Thanks. Does the rate reported (103 Netflow packets for 252945 IP packets) seem reasonable for the settings that I have?
Does my iptables chain look like it should direct ALL packets received by the NIC to the ipt_NETFLOW module?
That ratio seems extremely low to me, but I don't have experience with this.
Each Netflow packet contains stats for up to 30 flows. So if you have 252945 packets of traffic in 103 Netflow packets, that is about 2455 traffic packets per exported Netflow packet on average. Also, your stats may be skewed a bit by the timeouts (flows that are still alive may not be reported for half an hour by default; that is a customizable parameter).
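A rough back-of-the-envelope check using the counters above (assuming the 'new' column is the number of flows created):
echo $((252945 / 103))     # ~2455 traffic packets per exported Netflow packet
echo $((252945 / 1129))    # ~224 traffic packets per flow created so far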
Here is an example stat (provided to me today by a local ISP*):
cpu# stat: <search found new, trunc frag alloc maxflows>, sock: <ok fail cberr, bytes>, traffic: <pkt, bytes>, drop: <pkt, bytes>
Total stat: 59940372537 316606356023 15460868689, 0 0 0 0, sock: 515354987 0 0, 736796582 K, traffic: 332067224712, 256409092 MB, drop: 0, 0 K
The average there is about 644 traffic packets per Netflow packet. But it could depend on your network usage, I think. Maybe you have a web server serving long files, while that example is from a heterogeneous network with shorter flows.
Your iptables rules look OK to me.
* Full text of that stat, in case you need to compare something:
Flows: active 219079 (peak 307473 reached 11d23h2m ago), mem 18827K
Hash: size 800000 (mem 6250K), metric 1.2, 1.0, 1.0, 1.0. MemTraf: 101761316 pkt, 84386098 K (pdu 0, 0).
Timeout: active 1800, inactive 15. Maxflows 2000000
Rate: 1431485376 bits/sec, 225570 packets/sec; Avg 1 min: 1394299927 bps, 219935 pps; 5 min: 1382963893 bps, 218481 pps
cpu# stat: <search found new, trunc frag alloc maxflows>, sock: <ok fail cberr, bytes>, traffic: <pkt, bytes>, drop: <pkt, bytes>
Total stat: 59940372537 316606356023 15460868689, 0 0 0 0, sock: 515354987 0 0, 736796582 K, traffic: 332067224712, 256409092 MB, drop: 0, 0 K
cpu0 stat: 15189011397 80051558734 3864869473, 0 0 0 0, sock: 515354987 0 0, 736796582 K, traffic: 83916428207, 65227550 MB, drop: 0, 0 K
cpu1 stat: 14912816506 78870799163 3863990367, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 82734789530, 63780255 MB, drop: 0, 0 K
cpu2 stat: 14905353156 78829950846 3867033022, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 82696983868, 63741384 MB, drop: 0, 0 K
cpu3 stat: 14933191478 78854047280 3864975827, 0 0 0 0, sock: 0 0 0, 0 K, traffic: 82719023107, 63659901 MB, drop: 0, 0 K
sock0: 127.0.0.1:20001, sndbuf 83886080, filled 1, peak 1; err: sndbuf reached 0, other 0
Happy holidays.
I suppose I am curious... if my interface doesn't have the promiscuous mode flag set, will the kernel/stack drop packets that are sourced or destined for [one of] the IP[s] bound to the interface?
I didn't understand the point of your question. In any case, you can test your setup and see.
I am testing by running tcpdump, watching dmesg, and checking for promiscuous mode with:
cat /sys/class/net/eth0/flags
...
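(A side note, and partly an assumption on my part: the bit to check in that flags value is IFF_PROMISC, 0x100, but promiscuous mode enabled through libpcap/packet sockets doesn't always show up there; the kernel's "device eth0 entered promiscuous mode" line in dmesg may be the more reliable indicator.)
echo $(( $(cat /sys/class/net/eth0/flags) & 0x100 ))   # non-zero means the promisc flag is set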
It turns out that ntop (even though it is "told to do so" by default) is not putting the interface into promiscuous mode. This is quite odd, and I'll file a bug with the ntop project.
For now, I'll test using the iptables LOG target to see whether packets are being passed up the stack to iptables with and without promiscuous mode set on the interface.
This is more than likely the cause of the "problem" of not seeing the traffic I expect to see through Netflow.
I keep updating this thread in the hope that someone will find it helpful in the future.
Have a good holiday!
Thanks for all your help,
Matt
It appears that because I told ntop not to monitor any interface (using '--interface none'), ntop doesn't place the interface into promiscuous mode. This is intentional.
This suggests, and I agree in this case, that it is ipt_NETFLOW's "job" to put the interface into promiscuous mode.
Is this an option? If not, may I file an enhancement request?
Agree with whom? It is not ipt_NETFLOW's job to put anything into promisc mode. If you want the interface in promisc mode, you put it there yourself.
Okay. I figured it would be a good feature to add so that users can optionally use it. Thanks for your help with the config.
You can put the interface into promisc mode with ifconfig.
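For example:
ifconfig eth0 promisc
(or, with iproute2: ip link set dev eth0 promisc on)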