Version: 3.3.3
opHA 3 includes a CLI tool that can perform the same operations as the GUI, with some additional debugging information, and it also allows task automation.
/usr/local/omk/bin/opha-cli.pl

Usage: opha-cli.pl act=[action to take] [options...]

opha-cli.pl act=discover url_base=... username=... password=... role=... mirror=...
opha-cli.pl act=<import_peers|export_peers|list_peers>
opha-cli.pl act=delete_peer {cluster_id=...|server_name=...}
opha-cli.pl act=pull [data_types=X...] [peers=Y] [force=t]
    pull data types except nodes (primary <-- peers)
opha-cli.pl act=sync-all-nodes [peers=Y]
    sync all node data from the primary to the pollers (primary --> peers)
opha-cli.pl act=sync-processed-nodes
    sync node data based on changes done by NMIS9 node_admin.pl (primary --> peers)
opha-cli.pl act=import_config_data
    for a first installation, provide initial data (groups)
opha-cli.pl act=cleanup simulate=f/t
    clean metadata and files
opha-cli.pl act=clean_orphan_nodes simulate=f/t
    remove nodes with an unknown cluster_id
opha-cli.pl act=resync_nodes peer=server_name
    remove the nodes from the poller in the primary and pull the nodes from the poller (primary <-- peers)
opha-cli.pl act=clean_data peer=server_name [all=true]
    like resync_nodes but with all the data types (primary <-- peers);
    by default, clean_data just pulls data; all=true includes nodes
opha-cli.pl act=cleanup_poller simulate=f/t
    from the pollers, clean duplicate configuration items and files
opha-cli.pl act=check_duplicates
    check for duplicate nodes
opha-cli.pl act=get_status
opha-cli.pl act=setup-db
opha-cli.pl act=show_roles
opha-cli.pl act=data_verify
opha-cli.pl act=lock_peer {cluster_id=...|server_name=...}
opha-cli.pl act=unlock_peer {cluster_id=...|server_name=...}
opha-cli.pl act=peer_islocked {cluster_id=...|server_name=...}

Encryption key:
opha-cli.pl act=push_encryption_key
To get debug information in any command, run it with the argument debug=1..9. E.g.:

opha-cli.pl act=resync_nodes peer=server_name debug=8
You can discover a new peer with the following command:
opha-cli.pl act=discover url_base=... username=... password=... role=... mirror=...
Where:
- url_base: the base URL of the peer to discover
- username / password: credentials of a user on that peer
- role: the role of the peer (e.g. poller)
- mirror: whether the peer is a mirror
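As a concrete sketch, discovering a poller from the primary might look like the following. All values here (URL, credentials, role, mirror flag) are hypothetical placeholders, not defaults shipped with the product:

```shell
# Hypothetical example: discover a poller from the primary.
# Every value below is a placeholder - replace with your own details.
/usr/local/omk/bin/opha-cli.pl act=discover \
  url_base="https://poller01.example.com" \
  username=opha_user \
  password='CHANGE_ME' \
  role=poller \
  mirror=false
```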
We can import, export and list all the peer information with:
opha-cli.pl act=<import_peers|export_peers|list_peers>
It is possible to delete a peer with the command:
opha-cli.pl act=delete_peer {cluster_id=...|server_name=...}
This command will remove the peer and all the associated data: nodes, inventory, latest data, etc.
We need to specify either the cluster_id or the server_name.
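For example, either of the following would remove the same peer (the server name and cluster id below are placeholder values):

```shell
# Remove a peer by its server name (placeholder value)...
/usr/local/omk/bin/opha-cli.pl act=delete_peer server_name=poller01
# ...or by its cluster_id (placeholder value).
/usr/local/omk/bin/opha-cli.pl act=delete_peer cluster_id=a1b2c3d4-e5f6-0000-0000-000000000000
```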
With pull, we sync the inventory, latest data, events, status and registry data.
opha-cli.pl act=pull [data_types=X...] [peers=Y] [force=t]
Where:
- data_types: the data types to pull (by default, all data types except nodes)
- peers: limit the pull to the given peers (by default, all peers)
- force: force the pull
The pull action runs from the opha cron job.
When we pull from a mirror, if its poller is active, only the registry and status data will be pulled.
The opposite also applies: if the mirror is active, the poller's data won't be pulled.
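As an illustration, forcing a pull of a single data type from one peer could look like this. The data type name "events" is taken from the list above, and "poller01" is a placeholder peer name:

```shell
# Hypothetical example: force a pull of one data type from one peer.
# "events" is one of the data types listed above; "poller01" is a placeholder.
/usr/local/omk/bin/opha-cli.pl act=pull data_types=events peers=poller01 force=t
```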
To synchronise the nodes, we can run the following:
opha-cli.pl act=sync-all-nodes [peers=Y]
Where:
- peers: limit the sync to the given peers (by default, all pollers)
The sync-all-nodes action runs from the opha cron job.
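Both pull and sync-all-nodes already run from the opha cron job, but for the task automation mentioned earlier, an ad-hoc maintenance script could trigger them manually. This is a hypothetical sketch, not the shipped cron job; the debug level is an assumption:

```shell
#!/bin/sh
# Hypothetical maintenance sketch: pull peer data, then push node data.
# Not the shipped opha cron job; the debug level is an assumption.
BIN=/usr/local/omk/bin/opha-cli.pl
"$BIN" act=pull debug=1 || exit 1
"$BIN" act=sync-all-nodes debug=1
```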
The following command will sync the nodes processed by NMIS9 node_admin.pl:
opha-cli.pl act=sync-processed-nodes
For a first installation, this provides the initial data: it basically sets up the groups for the pollers and the primary, and adds the peers.
opha-cli.pl act=import_config_data
This function cleans metadata for files, and files with no metadata information. It is mainly intended for configuration files:
opha-cli.pl act=cleanup
By default, it will run in simulation mode.
Use simulate=f to perform the cleanup function.
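A typical workflow is to simulate first, review the output, and then run the cleanup for real:

```shell
# Dry run first (this is the default behaviour, shown explicitly here)...
/usr/local/omk/bin/opha-cli.pl act=cleanup simulate=t
# ...then, once the output looks right, perform the actual cleanup.
/usr/local/omk/bin/opha-cli.pl act=cleanup simulate=f
```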
It is possible to check which nodes are not associated with any cluster_id with the command:
opha-cli.pl act=clean_orphan_nodes simulate=f/t
By default, it will run in simulation mode.
Use simulate=f to remove the nodes (and associated data).
By default, the primary pushes the nodes to the pollers. With this command, it is possible to update the nodes from the pollers instead:
opha-cli.pl act=resync_nodes peer=server_name
Where:
- peer: the server name of the poller to resync
The clean_data action is like resync_nodes, but covers all the data types:

opha-cli.pl act=clean_data peer=server_name [all=true]

It will remove all the data from the peer and pull the data again. By default, it does not remove/resync the nodes; add all=true to include them.
This operation should be run on a poller. It will clean duplicate configuration items and files:
opha-cli.pl act=cleanup_poller simulate=f/t
By default, it will run in simulation mode.
Use simulate=f to remove the duplicate configuration items and files.
Get all the peers' status information as an array of Perl hashes:
opha-cli.pl act=get_status
This is the same information that we see on the opHA front page.
Show the roles defined in the system:
opha-cli.pl act=show_roles
Show how much data we have for each peer:
opha-cli.pl act=data_verify
It reports the number of inventory records and roles, which peers are active or enabled, as well as duplicate nodes and duplicated catchall inventory records.
opha-cli.pl act=check_duplicates
Similar to data_verify, but it will report only the duplicate data.
(V. >= 3.3.3) When a peer is doing a critical operation, it will be locked. We can see the lock status of a peer:
opha-cli.pl act=peer_islocked {cluster_id=...|server_name=...}
We can change the lock status of a peer with:
opha-cli.pl act=lock_peer {cluster_id=...|server_name=...}
opha-cli.pl act=unlock_peer {cluster_id=...|server_name=...}
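As a sketch, a manual maintenance window could wrap the work in a lock and confirm it was released afterwards. The server name "poller01" is a placeholder:

```shell
# Hypothetical maintenance window for a peer; "poller01" is a placeholder.
/usr/local/omk/bin/opha-cli.pl act=lock_peer server_name=poller01
# ... perform the critical maintenance on the peer here ...
/usr/local/omk/bin/opha-cli.pl act=unlock_peer server_name=poller01
# Confirm the lock was released.
/usr/local/omk/bin/opha-cli.pl act=peer_islocked server_name=poller01
```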
Set up the DB indexes. This is run by the installer during installation or upgrade:
opha-cli.pl act=setup-db
The primary can push the encryption key to all the pollers by running the following command:
opha-cli.pl act=push_encryption_key
It will only run if the server has the primary role, and only if the key has been modified since the last time it was sent.
To force sending it anyway, you can run it with the force argument:
opha-cli.pl act=push_encryption_key force=1