
opFlow Dashboard is Bare (graphs show no data)

It appears that you are not receiving any flows. Have you checked since you restarted? It can take 2-5 minutes to start receiving and processing NetFlow records.

There are a couple of things it could be, which you can verify.

1. Has the IP address of the opFlow server or virtual machine changed?

If so, change the NetFlow configuration on the network devices to send to the new IP address.
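It also helps to confirm which address and port flowd itself is listening on, so you can compare against what the devices export to. The sketch below parses a sample "listen on" line; on a real system the directive lives in /usr/local/etc/flowd.conf (the address and port here are assumptions).

```shell
# Sample "listen on" line as it might appear in flowd.conf; on a real
# system run: grep 'listen on' /usr/local/etc/flowd.conf
conf='listen on 0.0.0.0:12345'

# Extract the address and port so you can compare them with the
# export destination configured on your network devices.
addr_port=${conf#listen on }
port=${addr_port##*:}
addr=${addr_port%:*}
echo "flowd listens on $addr port $port"
```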

2. Verify that "flowd" is running

ps -ef | grep flowd

You should see three entries as well as the grep one, e.g.

[root@thor opmantek]# ps -ef | grep flowd
root 13356 1 0 Jun18 ? 00:00:10 flowd: monitor 
_flowd 13357 13356 0 Jun18 ? 00:00:30 flowd: net 
root 27114 1 0 12:40 ? 00:00:00 NMIS opflowd debug=0
root 32567 27106 0 12:51 pts/5 00:00:00 grep flowd

The first two entries (flowd: monitor and flowd: net) are the NetFlow daemon receiving flows; the third is the NMIS opflowd process.
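If you want to script this check (for monitoring, say), pgrep can test for the flowd monitor process directly. This is just a sketch using the process name from the ps output above.

```shell
# Check for the flowd monitor process by name (name taken from the
# ps output above); pgrep -f matches against the full command line.
if pgrep -f 'flowd: monitor' >/dev/null; then
  status="running"
else
  status="NOT running"
fi
echo "flowd monitor: $status"
```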

2a. If flowd is not running you can start it with the command:

service flowd start

Then repeat the ps -ef command. If flowd is still not running, you can check the syslog messages for the reason:

tail -50 /var/log/messages

Likely causes are a full disk, incorrect permissions, or missing directories.
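A quick pre-flight for those causes might look like the following; /tmp stands in for your real flow directory (e.g. /data/opflow), so substitute your own path.

```shell
# Substitute your real flow directory (e.g. /data/opflow) for /tmp.
dir=/tmp

# Does the directory exist and is it writable?
if [ -d "$dir" ] && [ -w "$dir" ]; then
  echo "$dir: exists and is writable"
else
  echo "$dir: missing or not writable"
fi

# Is the filesystem holding it getting full? (90% is an arbitrary
# threshold - pick one that suits your flow volume.)
use=$(df -P "$dir" | awk 'NR==2 {sub(/%/,"",$5); print $5}')
if [ "$use" -lt 90 ]; then
  echo "disk usage ${use}%: ok"
else
  echo "disk usage ${use}%: nearly full, consider purging old data"
fi
```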

2b. If opflowd is not running, start it with the command below

service opflowd start

3. Verify that Mongo is running

[root@thor log]# ps -ef | grep mongo
root 4462 27106 0 12:59 pts/5 00:00:00 grep mongo
root 24809 1 0 Jun19 ? 04:26:07 /usr/local/mongodb/bin/mongod --dbpath /var/mongodb --fork --logpath /var/log/mongodb.log --logappend

If it is not running, start it with the command below

service mongod start

4. Check the folders are correct

Check that all the configured folders agree. Run these commands and make sure everything is pointing to the same place.

grep logfile /usr/local/etc/flowd.conf
grep opflow_dir /usr/local/opmantek/conf/opFlow.nmis 
grep mongodbpath /etc/init.d/mongod

It is especially important that the logfile flowd writes to is the one opFlow picks up: opFlow's "flowd_data" setting is combined with "<opflow_dir>" to build that path.

grep logfile /usr/local/etc/flowd.conf
logfile "/data/opflow/flowd"
 

grep opflow_dir /usr/local/opmantek/conf/opFlow.nmis 
 '<opflow_dir>' => '/data/opflow',
 'flowd_data' => '<opflow_dir>/flowd',
 
grep mongodbpath /etc/init.d/mongod 
 mongodbpath=/data/mongodb
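The comparison can be scripted. The sketch below parses sample lines like the ones above (on a live system you would feed it the output of the grep commands) and checks that flowd's logfile lives under opFlow's opflow_dir.

```shell
# Sample lines in the format the grep commands above return.
flowd_line='logfile "/data/opflow/flowd"'
opflow_line="'<opflow_dir>' => '/data/opflow',"

# Pull the paths out of each line.
flowd_logfile=$(echo "$flowd_line" | sed 's/.*"\(.*\)".*/\1/')
opflow_dir=$(echo "$opflow_line" | sed "s/.*=> '\(.*\)',.*/\1/")

# flowd must write into the directory opFlow reads from.
if [ "$(dirname "$flowd_logfile")" = "$opflow_dir" ]; then
  echo "OK: flowd writes into $opflow_dir"
else
  echo "MISMATCH: flowd writes $flowd_logfile, opFlow reads $opflow_dir"
fi
```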

5. Check your diskspace

Make sure that wherever you put the flow data and the Mongo database, you have plenty of disk space.

df -h /data
Filesystem Size Used Avail Use% Mounted on
/dev/mapper/vg_data-lv_data
           247G  86G  148G  37% /data 

6. Check your Config is up to date

If you have patched your opFlow installation, make sure your configs are up to date.

/usr/local/opmantek/bin/opupdateconfig.pl /usr/local/opmantek/install/opFlow.nmis /usr/local/opmantek/conf/opFlow.nmis
/usr/local/opmantek/bin/opupdateconfig.pl /usr/local/opmantek/install/opCommon.nmis /usr/local/opmantek/conf/opCommon.nmis 

7. Run a purge manually

Purge the raw binary flow data and the older database data. This example assumes you want to keep 7 days of binary flow data and that it is located in /var/opflow.

/usr/local/opmantek/bin/opflow_purge_raw_files.sh /var/opflow 7
/usr/local/opmantek/bin/opflowd.pl type=purge
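To avoid running the purge by hand, it can be scheduled. Below is a hypothetical cron fragment (e.g. dropped into /etc/cron.d/opflow-purge, a name chosen for illustration) using the same 7-day retention and /var/opflow path as above; adjust both to your install.

```
# Hypothetical /etc/cron.d/opflow-purge - run the purge nightly.
# Keep 7 days of raw binary flow data under /var/opflow, then let
# opflowd purge the older database records.
0 2 * * * root /usr/local/opmantek/bin/opflow_purge_raw_files.sh /var/opflow 7
15 2 * * * root /usr/local/opmantek/bin/opflowd.pl type=purge
```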

 

8. Are NetFlow packets arriving at the server?

You have verified that flowd and opflowd are both running, and still there is no data on your dashboard. There are several things to check:

8a. Check the flowd logfile to make sure it is growing

Find the logfile by checking the flowd.conf file (probably in /usr/local/etc/flowd.conf)

[root@thor opflow]$ ls -l /data/opflow/flowd
-rw------- 1 root root 4900 Oct  7 10:42 flowd
[root@thor opflow]$ ls -l /data/opflow/flowd
-rw------- 1 root root 6800 Oct  7 10:42 flowd
[root@thor opflow]$ ls -l /data/opflow/flowd
-rw------- 1 root root 7600 Oct  7 10:43 flowd

In this example the file is growing, so flows are making it to the server. If they are not, you will see something like this:

[root@thor opflow]$ ls -l /data/opflow/flowd
-rw------- 1 root root 0 Feb  7  2013 flowd
[root@thor opflow]$ ls -l /data/opflow/flowd
-rw------- 1 root root 0 Feb  7  2013 flowd

In this case the file is not growing and more investigation is necessary.
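A small loop makes the growth check less manual. The path and the 2-second interval below are just examples; a quiet network may need a much longer gap between samples.

```shell
# Sample the spool file size a few times; a growing size means flows
# are arriving. Path and interval are examples - adjust to your setup.
f=/data/opflow/flowd
for i in 1 2 3; do
  if [ -f "$f" ]; then
    stat -c '%s bytes  %y' "$f"
  else
    echo "$f: not found"
  fi
  sleep 2
done
```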

8b. Checking for packets arriving on the interface

Running tcpdump will tell us whether packets are making it to the server:

# change/verify the interface (eth0) and port (if you have changed from the default config)
tcpdump -vni eth0 proto \\udp and port 12345

If no packets are arriving, double-check that the firewall configuration will allow them through:

iptables -L
 
# Output like the following means packets are allowed through. Different
# output does not necessarily mean they are blocked; it just means you
# will have to take a good look at what your rules are doing.
Chain INPUT (policy ACCEPT)
target     prot opt source               destination
Chain FORWARD (policy ACCEPT)
target     prot opt source               destination
Chain OUTPUT (policy ACCEPT)
target     prot opt source               destination

If you see no packets arriving with tcpdump and the firewall is not stopping them, verify the configuration of the node sending the NetFlow packets, and that they are being sent to the correct address and port. One way to check the config on a Cisco device is:

router>sh ip flow export
Flow export v9 is enabled for main cache
  Export source and destination details :
  VRF ID : Default
    Destination(1)  192.168.1.7 (12345)
    Destination(2)  192.168.1.42 (12345)
  Version 9 flow records
  25716317 flows exported in 890127 udp datagrams

If that is not the issue you will need to verify that nothing on your network is filtering the packets and preventing them from arriving at the server.
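To rule out the network path end to end, you can hand-generate a test UDP packet from the sending side while tcpdump is running on the collector. Bash's /dev/udp pseudo-device does this without extra tools; the address and port below are examples.

```shell
# Send one throwaway UDP packet to the collector (example address and
# port); run the tcpdump from 8b on the collector at the same time -
# the packet should show up there even though it is not a valid
# NetFlow record.
echo "opflow-path-test" > /dev/udp/127.0.0.1/12345 && echo "test packet sent"
```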

9. Determining where flows are coming from

To figure out where all the flows / conversations in your DB are coming from you can look at the agents list.  In opFlow 2.5 and below the agents list is only populated from flow data and not from conversations.  The information can be found in mongo quite easily:

mongo
use nmis; // or opFlow, check your config if you are not sure -- 'db_name' => 'opflow',
db.auth("opUserRW","op42flow42");
db.conversations.distinct("agent");

Using the tcpdump command from 8b can also be handy to see what is arriving. Keep in mind that unwanted flows can be dropped/ignored by modifying flowd.conf (see section 10 below).

10. Ignoring flow sources

When configurations are copied from one device to another, flow configuration can come with them, which can lead to more flows being sent to opFlow than expected. The best solution is to stop the device from sending the flows, but this cannot always be done (or done in a timely manner). To solve this, flowd.conf allows setting which devices to accept flows from, or which to ignore.

Editing /usr/local/etc/flowd.conf

# accept from a specific source
flow source 192.168.1.1
# or from a subnet
flow source 192.168.1.0/24
 
# more examples can be found in flowd.conf
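flowd's config grammar also has accept/discard filter rules that can drop flows from a specific agent entirely, which is useful when you cannot touch the sending device. The fragment below is a sketch from memory of that rule syntax; check flowd.conf(5) on your system before relying on it.

```
# discard everything from one chatty agent, accept the rest
discard agent 192.168.1.99
accept all
```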