...

Please note that if the server already has nodes, those nodes should be exported and imported again with localise_ids once the cluster_id is changed. Otherwise the node records will keep the old cluster_id attribute and the nodes will be treated as remote nodes (which cannot be edited or polled, for example).

Code Block
localise_ids=true

If localise_ids is set to true (the default is false), the cluster id is rewritten to match the local NMIS installation.
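
For example, something like the following (a sketch using the NMIS 9 node_admin tool; verify the exact arguments against your version of node_admin.pl):

Code Block
/usr/local/nmis9/admin/node_admin.pl act=export file=/tmp/nodes.json
/usr/local/nmis9/admin/node_admin.pl act=import file=/tmp/nodes.json localise_ids=true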

After the change, the omkd daemon needs to be restarted:
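
Code Block
systemctl restart omkd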

...

opHA uses a username/password to access the registry data from the poller, but once the poller has been discovered, it uses a token for authentication. Therefore, the "token" authentication method must be enabled on the poller.

Check that in <omk_dir>/conf/opCommon.json we have the following (where X is 1, 2 or 3; the order does not matter):
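
As a sketch, the relevant keys look like this (the token method may occupy any of the three auth_method_X slots, and the other methods shown here are placeholders):

Code Block
"auth_method_1" : "htpasswd",
"auth_method_2" : "token",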

...

From the Primary, we can initiate discovery of a peer using the URL https://servername (using SSL/TLS).
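
Before starting discovery, it can help to verify that the peer answers over HTTPS at all; a basic reachability check such as the following works (servername is a placeholder, and -k skips certificate verification):

Code Block
curl -vk https://servername/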

...

This can be set in <omk_dir>/conf/opCommon.json on the poller:

Code Block
"opha_url_base" : "https://servername.domain.com",  

...

If the request is taking too long, we can decrease the number of elements for each datatype.

...

Ensure that the opHA API user is configured to be the same as in the peer setup. The user should exist in the NMIS Users.nmis file and have permissions assigned; by default this is set to omkapiha. Check <omk_dir>/conf/opCommon.json:

Code Block
"opha_api_user": "omkapiha",

...

Code Block
systemctl restart omkd


Note: This error can also occur if you upgrade opHA and do not accept the EULA on the pollers. Double check the status of the pollers from the main opHA dashboard on the primary.

Connection error: SSL connect attempt failed

...

In this case, the SSL certificate was likely issued by a local certificate authority (CA), or you might be using self-signed SSL certificates; in either case you will need to let the applications know this is OK.
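
To see which certificate the peer actually presents, openssl can help (the hostname is a placeholder):

Code Block
openssl s_client -connect servername.domain.com:443 -showcerts </dev/null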

On the primary server, change the following configuration option in <omk_dir>/conf/opCommon.json (in the opHA section) to match the below:

Code Block
"opha_allow_insecure" : "1",

You may also need to enable the following on the primary server:

Code Block
"omk_ua_insecure" : "1",

After changing the configuration, restart the daemon:

Code Block
systemctl restart omkd

Connection error: there is an authorization problem

In the GUI, you observe the following error:

(Screenshot: authorization error shown in the GUI.)

Ensure that the opHA API user defined in opCommon.json on both the Primary and the poller(s) is the same user, and that this user exists in the Users.nmis table. By default the configured user is "omkapiha".

Code Block
"opha_api_user" : "omkapiha",


Teapot error: error saving node to remote

In the GUI, you observe the following error:

(Screenshot: teapot error shown in the GUI.)

Check /usr/local/omk/log/opDaemon.log. If you see the following lines:

Code Block
[debug] current_app_log: bad log, application_key missing
[error] NodeData::update_resource Error creating node in remote. Reason: 418 I'm a teapot
[debug] 418 I'm a teapot (0.127757s, 7.827/s)

Validate that the pollers and the primary have the same types set in nmis9/conf/Config.nmis for each of the following: 'nodetype_list', 'nettype_list', 'roletype_list'.

An easy way to do this is using the patch_config.pl tool:

Code Block
/usr/local/nmis9/admin/patch_config.pl -r /usr/local/nmis9/conf/Config.nmis roletype_list
/usr/local/nmis9/admin/patch_config.pl -r /usr/local/nmis9/conf/Config.nmis nettype_list
/usr/local/nmis9/admin/patch_config.pl -r /usr/local/nmis9/conf/Config.nmis nodetype_list

If they are mismatched, update the configuration, restart the daemons (nmis9d and omkd) as shown below, then rediscover the poller.
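
For example (assuming the default systemd service names):

Code Block
systemctl restart nmis9d
systemctl restart omkd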