This Wiki describes a mass node upload procedure from a Primary server, using the poller ID (cluster_id) to designate where each node will be added. To help with batch node operations, NMIS includes a small script that imports nodes from a CSV file. From version 9.1.1G onwards, there are also more detailed tools available, which are described on the page entitled Node Management Tools.


The bulk import script is /usr/local/nmis9/admin/import_nodes.pl, and a sample CSV file is provided at /usr/local/nmis9/admin/samples/import_nodes_sample.csv.

The minimum properties you must have to add a device to NMIS are: name, host, group, community, netType, cluster_id and roleType. Technically, you can use the defaults for group and roleType, and name and host can be the same, so the absolute minimum is host and community. The sample CSV below includes the full set of properties, and you can add further ones if needed.
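
For instance, relying on those defaults, a bare-bones CSV could be as small as this (the node name, address and community string here are hypothetical placeholders):

name,host,community
core_switch_1,192.0.2.10,public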

Use the activated.XXX fields to activate each node for NMIS, opConfig or opEvents (1 = active, 0 = inactive, as in the sample below).
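
For example, a row that also activates opEvents would follow the same pattern; the node details here are hypothetical, reusing the cluster_id from the sample further down:

name,host,group,community,netType,cluster_id,roleType,activated.NMIS,activated.opConfig,activated.opEvents
branch_router_1,10.0.0.1,Branches,public,wan,8e3d0d8e-381d-4369-bb4b-6830d39a2670,core,1,0,1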

This procedure allows you to perform a bulk upload of nodes while specifying the poller each one will be added to: at the end of the upload, the primary server runs a distribution process that assigns each node to its destination poller by means of the cluster_id attribute. For example:

name,host,group,community,netType,cluster_id,roleType,activated.NMIS,activated.opConfig
002_Test_OMK_Plus_Networks_Megacomputo,10.235.8.227,Branches,public,wan,8e3d0d8e-381d-4369-bb4b-6830d39a2670,core,1,0


To load these devices into NMIS9, run the following command:

/usr/local/nmis9/admin/import_nodes.pl csv=/usr/local/nmis9/admin/import_nodes_sample.csv simulate=f

This will read the CSV file and check whether each node already exists, matching on name / node_uuid. If the node exists, its properties will be overwritten with those specified in the CSV.

Obtaining the cluster_id

To obtain the cluster_id of the servers, access the opHA module in the GUI at the following path: http://ip_server/es/omk/opHA/peers


Alternatively, you can execute the following command from the console to query the same data:

[root@omk-vm9-centos7 bin]# /usr/local/omk/bin/opha-cli.pl act=list_peers
cluster_id id server_name status
d95af5ee-1bf6-0000-1111-000000000000 614f3ea8626660a3e47f4801 Poller_2 error
8e3d0d8e-381d-0000-1111-000000000000 614f472f626660a3e4887c7a Poller_6 save
9ceb22d9-c713-0000-1111-000000000000 614f819c85acaf93a12ebe86 Poller_3 error
65acfce4-6752-0000-1111-000000000000 614f861285acaf93a138d9fa Poller_5 transfer
c24166fb-79d5-0000-1111-000000000000 6151db4085acaf93a183c947 Poller_4 transfer
f174e362-ebc2-0000-1111-000000000000 6152521c689631c8173dcd4a Poller_1 transfer
bf9d4025-b106-0000-1111-000000000000 61534b43211e79f664076e44 Poller_4 transfer
[root@omk-vm9-centos7 bin]#
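
If you only need the cluster_id of one poller for your CSV, you can filter that output. A minimal sketch, assuming the poller is named Poller_6 as above (awk matches the line and prints the first column, which is the cluster_id):

/usr/local/omk/bin/opha-cli.pl act=list_peers | awk '/Poller_6/ {print $1}'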

Simulation mode

By default, the import script runs in simulation mode: nothing is saved, and the output shows whether each node would be created or updated.
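
Assuming the sample CSV shipped with NMIS, a simulation run simply omits the simulate=f argument:

/usr/local/nmis9/admin/import_nodes.pl csv=/usr/local/nmis9/admin/import_nodes_sample.csv

Example output: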

UPDATE: node=newnode host=127.0.0.1 group=DataCenter
    => Node newnode not saved. Simulation mode.
ADDING: node=import_test3 host=127.0.0.1 group=DataCenter
    => Node import_test3 not saved. Simulation mode.
ADDING: node=import_test1 host=127.0.0.1 group=Branches
    => Node import_test1 not saved. Simulation mode.
ADDING: node=import_test2 host=127.0.0.1 group=Sales
    => Node import_test2 not saved. Simulation mode.

If you are ready to run the command and apply these changes in NMIS, add simulate=f.
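
Using the same sample CSV as before:

/usr/local/nmis9/admin/import_nodes.pl csv=/usr/local/nmis9/admin/import_nodes_sample.csv simulate=f

Example output: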

UPDATE: node=newnode host=127.0.0.1 group=DataCenter
    => Successfully updated node newnode.
ADDING: node=import_test3 host=127.0.0.1 group=DataCenter
    => Successfully created node import_test3.
ADDING: node=import_test1 host=127.0.0.1 group=Branches
    => Successfully created node import_test1.
ADDING: node=import_test2 host=127.0.0.1 group=Sales
    => Successfully created node import_test2.


Once you have added or modified nodes, an NMIS update is required. You can run it for all nodes or for a single node, or simply leave it until the next scheduled update for that node is due (by default, every 24 hours).

If you run an update for all nodes, it may take some time to complete. The following command shows how to force an update for one node at a time, which is a good way to distribute the load. You can also schedule an update for all nodes by removing the job.node argument.

./bin/nmis-cli act=schedule job.type=update at="now + 5 minutes" job.node=testnode job.force=1
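
To stagger updates across many nodes, you could wrap the same command in a small shell loop. This is a minimal sketch, assuming a hypothetical nodes.txt file containing one node name per line:

# schedule each node's update 5 minutes apart to spread the load
i=0
while read -r node; do
    i=$((i + 5))
    ./bin/nmis-cli act=schedule job.type=update at="now + ${i} minutes" job.node="$node" job.force=1
done < nodes.txt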

To run an NMIS update for a single node with debugging enabled, which will write debug files to /tmp/:

./bin/nmis-cli act=schedule job.type=update at="now + 5 minutes" job.node=testnode job.force=1 job.verbosity=9

If you add a large number of devices, the addition itself may take some time to complete. The first time a node is added to NMIS, all of the RRD files for its performance data must be created. This only takes a few seconds per file, but each node can have 10 RRD files or more, so the total adds up quickly when adding thousands of devices at once: for example, 1,000 nodes with 10 RRD files each, at roughly one second per file, is nearly three hours of file creation.

