
Please refer to opHAv2 instructions

opHA2 Installation and Configuration Guide


These instructions are deprecated and apply only to the old version 1.

Installation Prerequisites

  • The individual performing this installation has some Linux experience.
  • NMIS8 is installed on the same server where opHA will be installed.
  • NMIS8 is installed in /usr/local/nmis8.
  • opHA will be installed into /usr/local/nmis8.
  • Root access is available (not always needed, but much easier).
  • Perl 5.10.
  • RRDtool 1.4.7.
  • NMIS 8.3.24G or later.
  • opHA will be installed onto the Master and each Poller NMIS server.

Installation Steps

Install CPAN Libraries

NMIS optionally uses a UUID for each device, and this is enabled by default when using opHA. The Perl CPAN library Data::UUID is therefore required; this is described in the article Using Unique Identifiers (UUID) for NMIS Nodes. You can install it using CPAN.
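A minimal sketch of the CPAN installation (run as root; assumes CPAN is already configured on the server):

```shell
# Install the Data::UUID module from CPAN
perl -MCPAN -e 'install Data::UUID'

# Verify the module loads and can generate a UUID
perl -MData::UUID -e 'print Data::UUID->new->create_str(), "\n"'
```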

Install opHA

This step must be repeated on each NMIS master and poller server.

  • Copy the opHA tarball to the poller or master NMIS server (a tarball is a gzip'd tar file, e.g. opHA-1.1.tar.gz). You may need to use SCP or FTP to get the file onto the server.
  • The file will now likely be in the user's home directory.
  • Create the installation directory if it does not already exist.
  • Change into the directory where the tarball was copied.
  • Untar the file.
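The steps above can be sketched as follows (paths and the tarball name are examples; adjust them to your environment and to the layout of your opHA tarball):

```shell
# Copy the tarball to the server first, e.g. with scp:
#   scp opHA-1.1.tar.gz root@server:~

mkdir -p /usr/local/nmis8   # create the installation directory if needed
cd ~                        # change to where the tarball was copied
tar xvzf opHA-1.1.tar.gz    # untar the file
```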

Optional Step - Install or Patch

If this is a fresh installation, copy the following files as samples. If this is an existing installation and you are upgrading, you do not need to do this step.

Using JSON for NMIS Database

opHA supports having NMIS use JSON for its database; this requires NMIS 8.4.8g or greater. JSON should be enabled on all servers running in an opHA cluster. The following needs to be run on every master and poller server in the cluster, and the runs should be coordinated to happen very close together.

This script will stop NMIS polling, convert the database files, update the NMIS configuration to use the new database format, then start the polling again.
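After conversion, NMIS is configured to use JSON database files; in /usr/local/nmis8/conf/Config.nmis this corresponds to a setting like the following (the item name is an assumption, so verify it against your installed configuration):

```perl
# /usr/local/nmis8/conf/Config.nmis (fragment; item name assumed)
'use_json' => 'true',
```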

opHA Authentication Model

opHA has a simple yet strong authentication model to prevent unwanted access to NMIS data.

The poller is configured with:

  • An NMIS user and password; by default this is an Apache htpasswd file, defined in /usr/local/nmis8/conf/users.dat.
  • An NMIS user, with associated privileges, defined in /usr/local/nmis8/conf/Users.nmis.
  • An NMIS user to use for authentication policy enforcement, defined in /usr/local/nmis8/conf/Config.nmis.
  • A server community, which the requesting server must use to request data.

The master is configured with (for each poller):

  • An NMIS user and password, which needs to match the poller configuration
  • A poller/server community, which needs to match the poller configuration.

This model enables you to use separate credentials for each poller, or the same credentials for every poller, providing for a simple configuration, or a more secure one if required.

All communications between master and poller can be done over SSL if required. This is supported by configuring your server's HTTPD to support SSL and then configuring the master-poller communications to use HTTPS.

opHA Poller Configuration

This configuration will be done on each NMIS Poller Server.  By default, the shared community for a poller is "secret". If you want to change this to something specific, edit the NMIS configuration item "poller_community" using your favourite text editor and change secret to your desired opHA community string.
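In /usr/local/nmis8/conf/Config.nmis the item looks something like this (the value shown is the default):

```perl
# /usr/local/nmis8/conf/Config.nmis (fragment)
'poller_community' => 'secret',   # change to your own community string
```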

Verify that the Apache user has been configured for master functions.  The default userid is "nmismst", and the file /usr/local/nmis8/conf/users.dat should include an entry for it.
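One way to create or verify such an entry, assuming the Apache htpasswd utility is available on the server:

```shell
# Add (or update) the nmismst user in the NMIS htpasswd file
htpasswd /usr/local/nmis8/conf/users.dat nmismst

# Inspect the result; the entry looks like nmismst:<password-hash>
grep nmismst /usr/local/nmis8/conf/users.dat
```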

 

opHA Master Configuration

Server Name for opHA

Server names need to be lower case with no spaces, e.g. NMIS_Server24 is bad, nmis_server24 is good.

Adding Pollers to Servers.nmis

Once the pollers have been set up, you can configure the master with each of its pollers.  This is done by editing the file /usr/local/nmis8/conf/Servers.nmis and adding a section for each server.

The file ships with default example entries. Edit an entry for each poller to match your environment; in this example the hostname of the poller is "vali".
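A sketch of a Servers.nmis poller entry, assuming commonly used field names (verify them against the default entries shipped with your version):

```perl
# /usr/local/nmis8/conf/Servers.nmis (fragment; field names assumed)
'vali' => {
  'name' => 'vali',          # lower case, no spaces
  'host' => 'vali',          # hostname or IP address of the poller
  'protocol' => 'http',      # use 'https' if SSL is configured
  'port' => '80',
  'user' => 'nmismst',       # must match the poller's users.dat
  'passwd' => 'nm1888',      # example only; use your own password
  'community' => 'secret',   # must match poller_community on the poller
  'config' => 'Config',
},
```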

There are many options in this configuration, but unless you want to change the defaults considerably, most of them will not matter.  If you wanted to use HTTPS to connect between the master and the poller, you would use https as the protocol and update the port accordingly.  You can also use different user and passwd values for each poller.

If you were presenting the poller through an alternate connection, e.g. a reverse proxy presenting a portal, you would modify portal_protocol, portal_port and portal_host accordingly.

Promoting NMIS to be a Master

By default, an NMIS server operates in standalone mode (which is also poller mode).  To have NMIS behave as a master, you will need to modify the configuration: edit the NMIS configuration item "server_master" using your favourite text editor and change it from "false" to "true".
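In /usr/local/nmis8/conf/Config.nmis the result looks like this:

```perl
# /usr/local/nmis8/conf/Config.nmis (fragment)
'server_master' => 'true',
```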

Adding Poller Groups to Master

On each poller you will need to determine which groups are currently in use.
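One way to list the groups in use on a poller, assuming its nodes are defined in the standard Nodes.nmis file:

```shell
# List the distinct group names configured on this poller
grep "'group'" /usr/local/nmis8/conf/Nodes.nmis | sort -u
```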

This will result in a list of groups which need to be added to the NMIS Master.  Edit /usr/local/nmis8/conf/Config.nmis and add these groups to the group list, which is a comma-separated list.
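The relevant Config.nmis item is a comma-separated list and looks something like this (the item name and the group names are examples to verify against your configuration):

```perl
# /usr/local/nmis8/conf/Config.nmis (fragment; groups are examples)
'group_list' => 'Branches,DataCenter,Brisbane,Boston,Saratoga',
```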

You can also use the admin script /usr/local/nmis8/admin/grouplist.pl on the master to find and patch all groups used by all devices imported from the pollers.   

Once opHA has successfully pulled/pushed the devices from poller to master, you can analyse and patch the groups list using the following.

grouplist.pl usage
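A sketch of how the script is typically run (the patch option name is an assumption; run the script without arguments to see its actual usage):

```shell
# List all groups used by devices known to this server
/usr/local/nmis8/admin/grouplist.pl

# Update the configured group list to include all discovered groups
/usr/local/nmis8/admin/grouplist.pl patch=true
```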

 

Limiting Master Group Collection

opHA supports multi-master, meaning you can have several masters collecting information from the same pollers if required.  This could be especially useful if you wanted to have one master with all groups on a poller, and another master with different groups from different pollers, effectively sharing some of the information between the masters.

To do this, you use the group property in the Servers.nmis file.  Edit the file and add the group property with a regular expression for the groups; for example, a value of 'Brisbane|Boston|Saratoga' will match all groups containing the substrings Brisbane, Boston or Saratoga.  A complete server entry would look like this.
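A sketch of a complete entry with the group property added (field names assumed, as in the earlier poller entry):

```perl
# /usr/local/nmis8/conf/Servers.nmis (fragment; field names assumed)
'vali' => {
  'name' => 'vali',
  'host' => 'vali',
  'protocol' => 'http',
  'port' => '80',
  'user' => 'nmismst',
  'passwd' => 'nm1888',    # example only
  'community' => 'secret',
  'config' => 'Config',
  'group' => 'Brisbane|Boston|Saratoga',   # regex: groups to collect
},
```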

Test Master Collection

You can verify that the master is collecting data from the pollers by running the master collection manually from the command line.
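Assuming the standard NMIS command-line interface, a manual master collection with debug output can be run like this:

```shell
# Run a master collection by hand with debugging enabled
/usr/local/nmis8/bin/nmis.pl type=master debug=true
```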

Server Priority

To handle devices being managed by more than one server with some determinism, opHA 1.4 introduces server priority.  By default a master server is priority 10 and a poller is priority 5.  If you have two pollers managing the same nodes and you want poller1 to be used as the primary source of information, set the server priority in the Servers.nmis file to be higher than on poller2, or conversely lower the priority on poller2.

This works with the master as well, with the master server having the higher priority by default.  The master priority is set with the NMIS configuration option master_server_priority and is 10 by default.

 

Running a Master Collection

You can optionally have the NMIS polling cycle do the master collection, or you can run it separately from cron.  If you want it separate, which is a good option, change the NMIS configuration item nmis_master_poll_cycle to false in the file /usr/local/nmis8/conf/Config.nmis.
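In Config.nmis:

```perl
# /usr/local/nmis8/conf/Config.nmis (fragment)
'nmis_master_poll_cycle' => 'false',
```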

Then add this line to the crontab which runs your NMIS collections.
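A sketch of such a crontab entry (assuming a system crontab with a user field; the two-minute schedule matches the cadence described below):

```shell
# Run the opHA master collection every 2 minutes
*/2 * * * * root /usr/local/nmis8/bin/nmis.pl type=master
```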

This will get your collections running every 2 minutes regardless of other polling.  There is also an option called master_sleep, which allows your type=update and type=master runs to occur every 1 minute and still have data; the default offset is 15.

Conclusion

After refreshing the web pages on the NMIS Master server, you will see the data from the pollers.
