

Version: 3.3.3

opHA 3 has a CLI tool that performs the same operations as the GUI, with some additional debugging information, and it also allows task automation.

Code Block

Usage: act=[action to take] [options...]

act=discover url_base=... username=... password=... role=... mirror=...
act=<import_peers|export_peers|list_peers>
act=delete_peer {cluster_id=...|server_name=...}
act=pull [data_types=X...] [peers=Y] [force=t]
	    pull data types except nodes
	    primary <-- peers
act=sync-all-nodes [peers=Y]
	    sync all node data from the primary to the pollers
	    primary --> peers
act=sync-processed-nodes
	    sync node data based on changes done by opnode_admin
	    primary --> peers
act=import_config_data
	    for first installation, provide initial data (groups)
act=cleanup simulate=f/t
	    clean metadata and files
act=clean_orphan_nodes simulate=f/t
	    remove nodes with unknown cluster_id
act=resync_nodes peer=server_name
	    remove the nodes from the poller in the primary and pull the nodes from the poller
	    primary <-- peers
act=clean_data peer=server_name [all=true]
	    like resync_nodes but with all the data types
	    primary <-- peers
	    by default, cleanup just pulls data
	    all=true includes nodes
act=cleanup_poller simulate=f/t
	    from the pollers, clean duplicate configuration items and files
act=check_duplicates
	    check for duplicate nodes
act=get_status
act=setup-db
act=show_roles
act=data_verify
act=lock_peer {cluster_id=...|server_name=...}
act=unlock_peer {cluster_id=...|server_name=...}
act=peer_islocked {cluster_id=...|server_name=...}


To get debug information from any command, run it with the debug argument, e.g.:

Code Block act=resync_nodes peer=server_name debug=8
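Since the CLI allows task automation, recurring maintenance can be scripted and run from cron. A minimal sketch, assuming the binary lives at /usr/local/omk/bin/opha-cli.exe (the path is not stated on this page; adjust to your install), with a dry-run toggle added for safety:

```shell
#!/bin/sh
# Nightly opHA maintenance sketch. ASSUMPTION: the opha-cli.exe path below is
# not taken from this page and may differ on your install; the act=/simulate=
# arguments are the ones documented above.
CLI="${OPHA_CLI:-/usr/local/omk/bin/opha-cli.exe}"
DRY_RUN="${DRY_RUN:-1}"   # default to dry-run; set DRY_RUN=0 to execute

run() {
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $CLI $*"
  else
    "$CLI" "$@"
  fi
}

run act=cleanup simulate=f            # clean metadata and files
run act=clean_orphan_nodes simulate=f # remove nodes with unknown cluster_id
run act=check_duplicates              # report duplicate nodes
```

Set DRY_RUN=0 only after reviewing the commands the dry run prints.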

Core functionality

Discover Peer

Code Block act=discover url_base=... username=... password=... role=... mirror=...

Import config data

For a first installation, provides the initial data (groups):

Code Block act=import_config_data

Cleanup Functions


Function to clean metadata for files and files with no metadata information. This is mainly for configuration files:

Code Block act=cleanup

By default, it will run in simulation mode. 

Use simulate=f to perform the cleanup function.

clean orphan nodes

It is possible to check which nodes are not associated with any cluster id with the command:

Code Block act=clean_orphan_nodes simulate=f/t

By default, it will run in simulation mode. 

Use simulate=f to remove the nodes (And associated data).
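Because simulate=f is destructive, automation may want to gate it behind an explicit confirmation and otherwise fall back to simulation mode. A sketch under the same assumption about the binary path (not stated on this page):

```shell
#!/bin/sh
# Gate the destructive form behind CONFIRM=yes; otherwise run in simulation.
# ASSUMPTION: the binary path is illustrative, not taken from this page.
CLI="${OPHA_CLI:-/usr/local/omk/bin/opha-cli.exe}"

orphan_cleanup_cmd() {
  if [ "${CONFIRM:-no}" = "yes" ]; then
    echo "$CLI act=clean_orphan_nodes simulate=f"
  else
    echo "$CLI act=clean_orphan_nodes simulate=t"
  fi
}

# Print the command that would run; drop the echo above to execute it instead.
orphan_cleanup_cmd
```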

resync nodes

By default, the Primary pushes the nodes to the pollers. Running this command, it is possible to update the nodes from the pollers instead:

Code Block act=resync_nodes peer=server_name


  • peer: Specify the server name. 

Clean data

Removes all the data from the peer and pulls the data again, like resync nodes but with all the data types:

Code Block act=clean_data peer=server_name [all=true]

By default, it does not remove/resync the nodes. It is possible to include them with:

  • all=true 

Cleanup poller

This operation should be run on a poller, and will clean duplicate configuration items and files:

Code Block act=cleanup_poller simulate=f/t

By default, it will run in simulation mode. 

Use simulate=f to perform the cleanup.

Diagnosis information

get status

Get all the peer status information as an array of Perl hashes:

Code Block act=get_status

This is the same information that we see on the opHA front page.

Show roles

Show the roles defined in the system:

Code Block act=show_roles

Data Verify

Shows how much data we have for each peer:

Code Block act=data_verify

It reports how many inventory records and roles there are, which peers are active or enabled, as well as duplicate nodes and duplicated catchall inventory records.

Check Duplicates

Code Block act=check_duplicates

Similar to data_verify, but will report just the duplicate data. 

Lock Peer

(V. >= 3.3.3) When a peer is doing a critical operation, it will be locked. We can see the lock status of a peer with:

Code Block act=peer_islocked {cluster_id=...|server_name=...}

We can change the lock status of a peer with:

Code Block act=lock_peer {cluster_id=...|server_name=...} act=unlock_peer {cluster_id=...|server_name=...}

Setup DB

Sets up the DB indexes. This is run by the installer during installation or upgrade:

Code Block act=setup-db