Overview

Opmantek gets many questions about how to scale NMIS8 for very large networks. Many factors affect polling performance, and the upper limit of polling is really only bounded by the number of processor cores, available memory, and disk IO performance.  We have customers managing tens of thousands of nodes using NMIS.

Server Specifications

The following server specifications are guidelines for NMIS installations.

 

              Small           Medium          Large           Massive
OS Storage    20GB            20GB            20GB            20GB
Data Storage  40GB            60GB            140GB           280GB
Memory        2GB             4-8GB           8GB             16GB+
CPU           2 x vCPU        2 to 4 x vCPU   4 to 6 x vCPU   8+ vCPU
Device Count  < 500 devices   < 1500 devices  < 2500 devices  A very large number of devices
Element Count 2000 elements   8000 elements   14000 elements  A very large number of elements

Elements are additional data being collected: an interface is an element, as is a CBQoS class.

Each element requires additional SNMP polling to collect its values, and then storage on disk to save the data.
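As a rough way to gauge your current element count, you can count the RRD files on disk, since each element typically ends up in its own RRD file. This is a sketch assuming the default NMIS8 database path; adjust it for your install:

```shell
# Count RRD files under the NMIS8 database directory.
# Each element (interface, CBQoS class, etc.) is typically stored in one
# RRD file, so this gives a rough element count.
NMIS_DB="${NMIS_DB:-/usr/local/nmis8/database}"   # default NMIS8 path
element_count=$(find "$NMIS_DB" -type f -name '*.rrd' 2>/dev/null | wc -l)
echo "Approximate element count: $element_count"
```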

Baseline Performance

Once you know your device counts and have settled on the virtual server specifications (yes, you can use physical servers too), it is a good idea to establish a performance baseline.

The baseline establishes how NMIS and the virtual server are performing: add ~50 nodes and see how the server handles them.  We have a customer polling ~200 nodes with an average poll cycle of about 20 seconds.  So if you are able to poll 50 nodes in less than 20 seconds your performance should be OK; if it takes longer than this, you might need to look at CPU and disk performance.  This will also give you an idea of the memory footprint.
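A simple way to take that measurement is to time a manual collect cycle and compare the wall-clock time against the ~20 second guide above. This sketch assumes the default NMIS8 install path:

```shell
# Time one manual collect cycle; with ~50 nodes, comfortably under
# 20 seconds of wall-clock time suggests CPU and disk are keeping up.
NMIS=/usr/local/nmis8/bin/nmis.pl   # default NMIS8 location; adjust if needed
if [ -x "$NMIS" ]; then
  time "$NMIS" type=collect mthread=true maxthreads=8
else
  echo "nmis.pl not found at $NMIS"
fi
```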

Adding Nodes to NMIS

If you try to add thousands of nodes to NMIS at once, it is going to take a while to process the nodes for the first time.  You have two main choices: add nodes in smaller batches, or stop polling while adding large numbers of nodes.

The best approach is to stop polling, then run an update cycle followed by a collect.  The most likely bottleneck when adding, say, 2000 devices at once is simply too much happening at the same time: the first update and collect create LOTS of RRD files, so disk IO is at a peak during this process.
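If you go the batch route, standard shell tools can split a node list into manageable chunks. A minimal sketch, assuming a one-node-per-line export called nodes.csv (generated here as dummy data for illustration):

```shell
# Generate a dummy 250-node list; in practice nodes.csv would be your
# real export, one node per line.
seq 1 250 | sed 's/^/node-/' > nodes.csv

# Split into batches of 100 nodes: nodes_batch_aa, nodes_batch_ab, ...
split -l 100 nodes.csv nodes_batch_

batches=$(ls nodes_batch_* | wc -l)
echo "Created $batches batch files"   # 250 nodes / 100 per batch -> 3 files
```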

Configuration Considerations

We have been working with our commercial customers using NMIS8 at this sort of scale and it works.  They use between 12 and 16GB of memory, which handles it.  What has been a problem is disk IO performance.
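To confirm whether disk IO is the bottleneck, watch the data volume while a collect cycle runs. iostat (from the sysstat package) is the usual tool for this:

```shell
# Sample extended disk stats: 5 second interval, 3 reports.
# Sustained high %util or large await values on the NMIS data volume
# indicate disk IO is the limiting factor.
io_tool="$(command -v iostat || echo none)"
if [ "$io_tool" != none ]; then
  iostat -x 5 3
else
  echo "iostat not found; install the sysstat package"
fi
```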
So, I would suggest stopping polling by commenting out this line in the crontab:
#*/5 * * * * /usr/local/nmis8/bin/nmis.pl type=collect mthread=true maxthreads=8
Then add nodes, all of them at once or in batches.  Restart fpingd so that it reloads all the nodes you added.
/usr/local/nmis8/bin/fpingd.pl restart=true
Then run an update manually with nohup.  If you have 12GB of memory you can give it plenty of threads; 20 should do it, but watch your memory usage, as you can probably get to 30 threads.
cd ~
nohup /usr/local/nmis8/bin/nmis.pl type=update mthread=true maxthreads=20&
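While the update runs in the background, keep an eye on memory so you know whether there is headroom for more threads. A quick check on Linux:

```shell
# Report total and available memory in MB; if available memory gets close
# to zero or swap starts being used, reduce maxthreads.
total_mb=$(free -m | awk 'NR==2 {print $2}')
avail_mb=$(free -m | awk 'NR==2 {print $7}')
echo "Total: ${total_mb}MB  Available: ${avail_mb}MB"
```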
This will take a while to run the first time.  When it finishes, run a collect cycle the same way:
nohup /usr/local/nmis8/bin/nmis.pl type=collect mthread=true maxthreads=20&
Now all the big disk activity is done and you should be able to start NMIS polling again by re-enabling the crontab entry:
*/5 * * * * /usr/local/nmis8/bin/nmis.pl type=collect mthread=true maxthreads=<MAX THREADS BASED ON YOUR BASELINE>
You can also control how NMIS schedules its work by moving the summary and thresholding operations to cron; I would suggest this as good practice for larger installations.
In CRON:
*/2 * * * * /usr/local/nmis8/bin/nmis.pl type=summary
4-59/5 * * * * /usr/local/nmis8/bin/nmis.pl type=threshold
In Config.nmis:
'threshold_poll_cycle' => 'false',
'nmis_summary_poll_cycle' => 'false',
The other BIG consideration is your polling policy, which will have changed from NMIS4 to NMIS8: the more interfaces you collect on, the more disk, CPU and memory you will consume.  See how you go with the above; I believe all this should sort you out.