Traffic summary using iptables
Revision as of 17:14, 12 March 2011
= Overview =

Using iptables and a simple Perl script to analyze data traffic.
= Iptables setup =

I've added two new chains, one accounting for incoming and one for outgoing traffic inside the forwarding rules, and added the following to my firewall startup script. It is pretty self-explanatory: the rules in the accounting chains have no target, so each one just updates its packet and byte counters and the packet falls through back to the FORWARD chain.
<pre>
### Static variables
IPT=/sbin/iptables

### Static machines
WEB=192.168.0.210
MAIL=192.168.0.220
MIRROR=192.168.0.240

### Create logging of traffic (assuming eth0 is the wan interface and/or the one doing the forwarding)
$IPT -N TRAFFIC_ACCT_IN
$IPT -N TRAFFIC_ACCT_OUT
$IPT -I FORWARD -i eth0 -j TRAFFIC_ACCT_IN
$IPT -I FORWARD -o eth0 -j TRAFFIC_ACCT_OUT
$IPT -A TRAFFIC_ACCT_IN --dst ${WEB}
$IPT -A TRAFFIC_ACCT_IN --dst ${MAIL}
$IPT -A TRAFFIC_ACCT_IN --dst ${MIRROR}
$IPT -A TRAFFIC_ACCT_OUT --src ${WEB}
$IPT -A TRAFFIC_ACCT_OUT --src ${MAIL}
$IPT -A TRAFFIC_ACCT_OUT --src ${MIRROR}
</pre>
= Perl script to grab data =
I'm putting the data from iptables into a local MySQL database; from there I analyze it further.

It's a simple two-stage process: first, read the counters from the newly created chains; second, insert them into the database and reset the counters.

Mine is based around collecting data every hour, so I've made a cron entry for that.

Perl script:
<pre>
#!/usr/bin/perl
use strict;
use DBI;

## Setup database connection
my $dsn  = 'dbi:mysql:<dbname>:<ip or localhost>:3306';
my $user = '<username>';
my $pass = '<password>';
my $dbh  = DBI->connect($dsn, $user, $pass) or die "Horrible!!\n$DBI::errstr\n";

my ($sec,$min,$hour,$mday,$mon,$year,$wday,$yday,$isdst) = localtime(time);
$mon++;
$year += 1900;

## Only grab outgoing data
my @aData = `/sbin/iptables -L TRAFFIC_ACCT_OUT -n -v -x | awk '\$1 ~ /^[0-9]+\$/ { printf \"%s, %d \\n\", \$7, \$2 }'`;

foreach (@aData) {
    chomp;
    my @aSplitter = split(/, /, $_);
    my $sExtraSQL = "ON DUPLICATE KEY UPDATE traffic = ".$dbh->quote($aSplitter[1]);
    my $sSQL = "INSERT INTO traffic (source, year, month, day, hour, traffic) VALUES (?, ?, ?, ?, ?, ?) $sExtraSQL\n";
    my $sth = $dbh->prepare($sSQL);
    $sth->execute($aSplitter[0], $year, $mon, $mday, $hour, $aSplitter[1]);
}

my $bResetIptableCounter = `/sbin/iptables -Z TRAFFIC_ACCT_OUT`;
</pre>
The code is extremely straightforward, with practically no error checking. If I miss one hour's traffic it's no big deal, so I haven't put much work into that part.
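The awk filter in the backticks can be tried on its own. A sketch using a made-up sample of `iptables -L TRAFFIC_ACCT_OUT -n -v -x` output (for accounting rules with no target, field 2 is the byte count and field 7 the source address; real counters will of course differ):

```shell
# Fabricated sample listing, for illustration only.
sample='Chain TRAFFIC_ACCT_OUT (1 references)
    pkts      bytes target     prot opt in     out     source               destination
    1200     987654            all  --  *      eth0    192.168.0.210        0.0.0.0/0'

# Keep only rows whose first field is numeric (the per-rule counter lines)
# and emit "source, bytes" -- the same filter the Perl script uses.
echo "$sample" | awk '$1 ~ /^[0-9]+$/ { printf "%s, %d \n", $7, $2 }'
# → 192.168.0.210, 987654
```

The header and chain-name lines fall out automatically because their first field is not numeric.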
= Cron entry =
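The actual cron entry wasn't captured in this revision. An hourly entry along these lines would match the description above (the script path is an assumption):

```
# min hour dom mon dow  command -- run the collector at the top of every hour
0 * * * * /usr/local/bin/traffic_acct.pl
```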
= Database create options =

For those wanting it: I've made a unique key formed by year+month+day+hour+source. It's not very efficient, but I'm dealing with a relatively low amount of data on my end (checking 3 hosts, so for a year I'll have at most 3 hosts * 24 hours * 365 days = 26,280 entries).
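The table definition itself was cut off in this revision. A minimal sketch consistent with the INSERT statement in the Perl script and the unique key just described (column types and sizes are assumptions):

```sql
-- Hypothetical reconstruction; adjust types and names to taste.
CREATE TABLE traffic (
    source  VARCHAR(39)       NOT NULL,  -- source IP as printed by iptables
    year    SMALLINT UNSIGNED NOT NULL,
    month   TINYINT UNSIGNED  NOT NULL,
    day     TINYINT UNSIGNED  NOT NULL,
    hour    TINYINT UNSIGNED  NOT NULL,
    traffic BIGINT UNSIGNED   NOT NULL,  -- bytes seen during that hour
    UNIQUE KEY uk_slot (year, month, day, hour, source)
);
```

The unique key is what makes the script's ON DUPLICATE KEY UPDATE clause work: a re-run within the same hour updates the existing row instead of failing.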