

Managing and monitoring a large network of devices requires a scalable, highly available, and easy-to-manage solution. opHA 3.0.5 brings a new feature that allows you to centralise the configuration files on the Primary and send them to the pollers. The configuration files can be applied in NMIS and OMK. These partial configuration files override the configuration that the poller already has.


  • NMIS 9.0.6 is required
  • opHA version 3.0.5
  • The pollers also need to be updated to at least version 3.0.5 of opHA

How does it work

The opHA 3.0.5 workflow has these main steps:

  • Create a file from a template (NMIS or OMK)
  • Validate and save the file
  • Create a group. By default, the following groups are created:
    • pollers: contains all peers
    • Primary: contains the local machine
  • Assign peers to a group
  • Assign a group to a configuration file
  • Push the configuration file: The file will be sent to the peers and the daemons will be restarted when needed (you will see a message when a restart is necessary). 

You can also map a role to a peer. By default, the following roles are available: 

  • 'Poller', 'Primary', 'Portal', 'Local'

And these roles are assigned by default: 

  • Primary: the local server 
  • Poller: all existing peers available

Configuration files

View/Edit configuration files

From the menu Views > Configuration, we can see a list of the configured files.

Create a new configuration file

We can create a new configuration file by clicking the button "New Configuration File":

We will need to provide:

  • The file type: NMIS or OMK. It is very important to select the right type, as the file will be applied to a different product. 
  • The file subtype: We can choose the subtype depending on the type. 
  • The file name (cannot be empty and must have the .nmis extension). 

Once we select a type, we will see a template loaded into the editor. A generic template will be loaded if the file is not opCommon (OMK) or Config (NMIS).

IMPORTANT considerations:

  • The file is edited in JSON format, but it is saved as a Perl hash. We can download the file as it will be saved using the Download button. 
  • We can remove/add sections when the Section "all" is selected. 
  • We can validate the file before saving it. If it is not valid, we will see the output on the console at the bottom. 
  • By default, each time the file is saved a backup file is created, with a maximum of two backups kept.
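To illustrate the JSON-to-Perl-hash point above, here is a hypothetical sketch: the file name (authentication.nmis) and its contents are made up for illustration, but the shape matches the description, with the GUI showing JSON and the saved file being a Perl hash.

```shell
# As edited in the GUI (JSON):
#   { "auth_method_1": "htpasswd" }
#
# As saved on disk (Perl hash) -- written here with a heredoc for illustration:
cat <<'EOF' > authentication.nmis
%hash = (
  'auth_method_1' => 'htpasswd'
);
EOF
cat authentication.nmis
```

This is why the Download button is useful: it lets you inspect the exact Perl-hash form that will be written, rather than the JSON view shown in the editor.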

Push a configuration file

From the menu Views > Configuration, we can see a list of the configured files. Pressing the Peer Group > Push button, we can push a configuration file to the configured groups.

A note will be displayed when some daemons need to be restarted. 

Once a configuration is pushed, you will be able to see the resulting log by pressing the status button:

Remove a Configuration file

From the menu Views > Configuration, we can see a list of the configured files. Pressing the  > Remove button, we can remove the file from the peers where it was successfully sent.

View the resulting configuration

We can see the resulting configuration files of a peer from the menu "View Configuration file":

Select the peer:

And select the file from the file browser:


Add a peer to a group

We can add a peer to a group using the Peers Group button from the Configuration menu. 

In this screen, we can see all the available groups and the peers added to each group. 

To edit the group members, we need to select the group, click Edit, and change the group members. Then, press Save.

To edit the groups themselves, we need to press the Edit Groups button.

Create a new Group / Edit Groups

We can edit groups using the Edit Groups button from the Peers Groups menu. 

You can edit, add or remove existing groups. 

Please be aware that, if you remove a group, all group associations will be lost. 

Assign a Group to a Config file

From the menu Views > Configuration, we can see a list of the configured files. Pressing the Peer Group > Edit button, we can assign groups to a configuration file so that the file will be sent to those groups.


Role Mapping

We can assign a peer to a role using the Role Mapping button from the Configuration menu:

We can add new mappings, and edit or remove existing ones. Note that if a peer already has a role assigned, it will not appear in the add dialog; you will need to edit the existing mapping instead. 

What Centrally Managed Means 

Please note that once we change the NMIS or OMK configuration from the Primary, it is not intended to be edited from the poller itself. 

If a peer's role is set to poller, the opHA menu will not be available:

When we update the NMIS configuration from a Primary, we will see a message on the NMIS configuration screen, and we will not be able to update the configuration from NMIS. 

Restoring a Backup

By default, two backup files are saved on the poller in the directory <nmis>/backups or <omk>/conf/conf.d/backups. This location can be changed in the configuration file:

  • opha_backup_master_location

The number of backup files can also be changed in the configuration file, modifying the configuration item:

  • opha_max_backup_files
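As a sketch of overriding these two items, and assuming that external override files are saved in the Perl-hash form described earlier (the file name opha_backups.nmis and the values are hypothetical, and a temporary directory stands in for the real <omk>/conf/conf.d):

```shell
# Temporary directory standing in for <omk>/conf/conf.d (assumption for illustration)
CONF_D=$(mktemp -d)

# Hypothetical override file: keep 3 backups instead of the default 2,
# and set an explicit backup location.
cat <<'EOF' > "$CONF_D/opha_backups.nmis"
%hash = (
  'opha_max_backup_files'       => 3,
  'opha_backup_master_location' => '/usr/local/omk/conf/conf.d/backups',
);
EOF
cat "$CONF_D/opha_backups.nmis"
```

Check your installation's actual override conventions before adopting this layout; the file name and location here are assumptions.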

Restoring a backup must be done manually, on each poller where the restoration needs to be done, following these steps:

  • Go to the backup directory on the poller. By default:
    • NMIS: <nmis>/backups
    • OMK: <omk>/conf/conf.d/backups
  • The backup file name will be file_name.nmis.version. For example, authentication.nmis.3
  • Rename the file, removing the version suffix. For example, authentication.nmis.
  • Move the file to the external configuration folder. By default:
    • NMIS: <nmis>/conf/conf.d
    • OMK: <omk>/conf/conf.d
  • For some of the changes to take effect, the corresponding daemon must be restarted. For example:
    • service nmis9d restart
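The restore steps above can be sketched as a shell session. For illustration, a temporary directory stands in for the real <nmis> install and a fake backup file (authentication.nmis.3) is created first; substitute your actual paths and file names.

```shell
# Temporary directory standing in for the real <nmis> directory (illustration only)
NMIS_DIR=$(mktemp -d)
mkdir -p "$NMIS_DIR/backups" "$NMIS_DIR/conf/conf.d"
printf "%%hash = ();\n" > "$NMIS_DIR/backups/authentication.nmis.3"   # pretend backup

cd "$NMIS_DIR/backups"
# 1. Copy the backup file, dropping the version suffix
cp authentication.nmis.3 authentication.nmis
# 2. Move it into the external configuration folder
mv authentication.nmis "$NMIS_DIR/conf/conf.d/"
ls "$NMIS_DIR/conf/conf.d"
# 3. Restart the corresponding daemon so the change takes effect (not run here):
#    service nmis9d restart
```

Using cp rather than mv for the rename step keeps the versioned backup in place, so you can repeat the restore if needed.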

Cleanup Utilities

opHA includes utility tools to clean up orphaned files and orphaned metadata: 

  • <omk>/bin> ./ act=cleanup : Cleans Primary files and metadata. Must be run from the Primary. It will check for:
    • Metadata without an associated file in the file system
    • Orphaned files in the file system
    • Backup files 
  • <omk>/bin> ./ act=cleanup_poller : Cleans poller files. It will check for:
    • Orphaned backup files from OMK. 
    • Duplicated configuration items in OMK: this means the same configuration item exists in different external configuration files. This can lead to errors, as the item that is loaded first will override the other. 
    • Duplicated configuration items in NMIS. 
  • <omk>/bin> ./ act=clean_orphan-nodes : Removes nodes with no cluster_id associated. It will ask for confirmation for each node if simulate=f is specified. 

Important: the cleanup utilities run in simulation mode by default. It is good practice to run a simulation first to check all the files that are going to be removed. simulate=f will actually remove the files and metadata. 

New configuration Items

These are new configuration values: 

  • opha_conf_templates_url: '/install/templates'
  • opha_backup_master_location: $self->{app_path}/conf/conf.d/backups
  • opha_master_config_location: $self->{app_path}/conf/conf.d
  • opha_conf_files_url: $self->{app_path}/conf/peers
  • opha_max_backup_files: 2
  • opha_restart_nmis_needed_sections: The sections for NMIS that require a daemon update.
  • opha_restart_omk_needed_sections: The sections of the configuration for OMK that require a daemon update.
  • opha_config_file_types: ['NMIS', 'OMK']

New configuration items for NMIS:

  • '<nmis_conf_ext>' => '<nmis_conf>/conf.d'


Once a peer is edited from opHA, it is important to know: 

  • Backup files are saved, but it is not possible to roll back from the GUI. By default, each configuration file has 2 backup files. 
  • Once you edit the NMIS configuration centrally, it is not possible to edit it from the NMIS GUI. You will see a message when configuration files are overriding the local configuration. 
  • If the daemon is restarted, you will not see the result in the log, but you can check the opHA landing page to see the state of the daemons.