...

The opConfig application runs on the servers which talk to the end devices, so you do not need to install opConfig on your primary server for it to function.  Some aspects of node administration for opConfig can be done from the primary server and then synchronised to the pollers.

Use the opAdmin GUI to bulk edit nodes

...

Code Block
/usr/local/nmis9/admin/node_admin.pl act=list group=GROUPNAME

Using a script to activate nodes for opConfig

The following BASH script will activate opConfig for all nodes on the primary server; opHA will then synchronise this to all the pollers.

...

Code Block
sudo bash enable-opconfig-all-nodes.sh
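The body of enable-opconfig-all-nodes.sh is not shown above. As a minimal sketch of what it could contain, assuming the same node_admin.pl act=list and act=set commands used elsewhere on this page:

Code Block
#!/usr/bin/env bash
# Hypothetical sketch of enable-opconfig-all-nodes.sh, not the original script.
# It enables opConfig on every node known to the primary; opHA can then
# synchronise the change to the pollers.
NODELIST=`/usr/local/nmis9/admin/node_admin.pl act=list`
for NODE in $NODELIST
do
	/usr/local/nmis9/admin/node_admin.pl act=set node=$NODE entry.activated.opConfig=1
done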

opConfig Automation with a Primary and Poller Server

Credential Sets Created on all Servers in the Cluster

The first step is to create the credential sets on all servers in the cluster. There are APIs for doing this, or you can open the opConfig GUI and look for the menu option "System → Edit Credential Sets".

API details here: opConfig Credential Sets API
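If you prefer to script this step, the sketch below shows one way to push the same credential set to every server in the cluster from the shell. The endpoint path, JSON fields, credentials and server names used here are placeholders for illustration only; take the exact URL and payload from the opConfig Credential Sets API page linked above.

Code Block
#!/usr/bin/env bash
# Hypothetical sketch: create the same credential set on every server in the cluster.
# The API path and JSON fields are placeholders; confirm them against the
# opConfig Credential Sets API documentation before use. Authentication is shown
# as HTTP basic for brevity; the API may require a login token instead.
for SERVER in primary.example.com poller1.example.com poller2.example.com
do
	curl -k -u admin:PASSWORD \
		-H "Content-Type: application/json" \
		-X POST "https://$SERVER/omk/opConfig/api/v2/credential_sets" \
		-d '{"name":"YOURCREDENTIALSET","username":"cisco","password":"SECRET"}'
done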

Configure the Nodes from the Primary

The following handy BASH script will update all the nodes, activating opConfig and setting the OS info, credential set and connection details that opConfig needs to operate.

This example would set the details for Cisco devices running NXOS 7.0 using SSH.

This command also sets the Node Context Name and URL, so there is a button in the opCharts and NMIS GUIs to access opConfig easily.

You should replace YOURCREDENTIALSET with the relevant credential set for these devices, and YOURPOLLERNAMEORFQDN with the FQDN or hostname of the poller, e.g. lab-poller.opmantek.net

Code Block
#!/usr/bin/env bash

if [ "$1" == "" ]
then
	echo "This script will update the opConfig settings for all nodes."
	echo "Give me any argument to confirm you would like me to run."
	echo "e.g. $0 runnow"
	exit
fi

NODELIST=`/usr/local/nmis9/admin/node_admin.pl act=list`

for NODE in $NODELIST
do
	# multi line command to make it easier to read.
	/usr/local/nmis9/admin/node_admin.pl act=set node=$NODE \
	entry.activated.opConfig=1 \
	entry.configuration.os_info.os=NXOS \
	entry.configuration.os_info.version=7.0 \
	entry.configuration.connection_info.credential_set=YOURCREDENTIALSET \
	entry.configuration.connection_info.personality=ios \
	entry.configuration.connection_info.transport=SSH \
	entry.configuration.node_context_name="View Node Configuration" \
	entry.configuration.node_context_url=//YOURPOLLERNAMEORFQDN/omk/opConfig/node_info?o_node=\$node_name
done

As in the earlier example, you could add a filter such as group=GROUPNAME to the NODELIST command to limit which nodes are processed.
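For example, to restrict the script to a single NMIS group, the NODELIST line could become:

Code Block
NODELIST=`/usr/local/nmis9/admin/node_admin.pl act=list group=GROUPNAME`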

Sync node data in the Cluster

With many node changes, the best option is to trigger opHA to sync all the node data to the pollers.  You can open the opHA menu, view each poller and click "Sync all nodes", or you can run the following command on the primary.

Code Block
sudo /usr/local/omk/bin/opha-cli.pl act=sync-all-nodes