If opFlow has grown too large for the disk or partition it is installed on, the following process can be used to clean up all flow/conversation data while keeping other data, such as reports.
There are two types of data that need to be cleaned up:
- Raw flow data (nfdump files)
- Database data
It is best to shut down all daemons before starting.
Raw flow data
The opFlow installer adds a cron job that cleans these files up. It uses the config variable opflow_raw_files_age_days and purges any raw flows that are older than the number of days specified.
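For reference, the setting looks something like this in omk/conf/opCommon.nmis (the value of 7 days shown here is illustrative, not necessarily your default; check your own config):

```perl
# omk/conf/opCommon.nmis -- raw flow retention
# Raw nfdump files older than this many days are purged by the cron job.
'opflow_raw_files_age_days' => 7,
```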
To clean up manually, find the directory the raw files are saved into by looking at the config file omk/conf/opCommon.nmis. To clear out all raw flow data, simply delete every file in the directory that setting points to.
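A manual purge can be done with find. The sketch below uses a temporary directory with simulated nfdump files so it is safe to run as-is; in practice you would substitute the raw-files directory from your opCommon.nmis and an age matching opflow_raw_files_age_days:

```shell
# Stand-in for the raw flow directory from opCommon.nmis (assumption:
# a temp dir is used here so this sketch is safe to execute).
RAW_DIR=$(mktemp -d)

# Simulate nfdump capture files: one fresh, one 10 days old.
touch "$RAW_DIR/nfcapd.new"
touch -d "10 days ago" "$RAW_DIR/nfcapd.old"

# Purge raw files older than 7 days (mirrors opflow_raw_files_age_days=7).
find "$RAW_DIR" -type f -mtime +7 -delete

# The fresh file survives; the 10-day-old file is gone.
ls "$RAW_DIR"
rm -rf "$RAW_DIR"
```

To delete all raw data rather than just old files, drop the `-mtime +7` predicate.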
Database data
By default, all opFlow data is stored in a database named 'flows'. This is configurable: opCommon.nmis defines the database name with the setting 'opflow_db_name' => "flows".
NOTE about DB size: when re-creating the database using opflow-cli, the following config variables determine how large to make the collections: 'opflow_db_conversations_collection_size' => 16106127360 (15 GiB) and 'opflow_db_flows_collection_size' => 5368709120 (5 GiB). These can be changed in the config, or overridden on the CLI by specifying usepercent=NN, which sizes the database based on the given percentage of the disk.
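For reference, these settings appear in omk/conf/opCommon.nmis; the sizes shown are the defaults quoted above:

```perl
# omk/conf/opCommon.nmis -- database name and collection sizes
'opflow_db_name' => "flows",
'opflow_db_conversations_collection_size' => 16106127360,  # 15 GiB
'opflow_db_flows_collection_size' => 5368709120,           # 5 GiB
```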
There are two options to clean up database data:
- Remove all data and start from scratch (see the note above regarding DB size).
- Drop flow/conversation data but keep all other data:
  - First, back up the data you want to keep.
  - Drop the database and re-create it (see the note above regarding DB size).
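Assuming the backend is MongoDB and the database name is the default 'flows', the backup-and-drop steps might look like the sketch below. It is a dry run that only prints the commands rather than executing them, so it can be reviewed safely; the mongodump/mongo invocations are assumptions to verify against your installation, and re-creating the database afterwards is done with opflow-cli as described above:

```shell
#!/bin/sh
# Dry-run sketch: prints the commands instead of executing them.
# DB_NAME should match 'opflow_db_name' in opCommon.nmis.
DB_NAME="flows"
BACKUP_DIR="/tmp/opflow-backup"   # hypothetical backup location

# 1. Back up the data you want to keep (assumes MongoDB's mongodump).
echo "mongodump --db $DB_NAME --out $BACKUP_DIR"

# 2. Drop the database.
echo "mongo $DB_NAME --eval 'db.dropDatabase()'"
```

Remove the echo wrappers only once you are satisfied the commands are correct for your system, and re-create the database with opflow-cli afterwards.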
Start all daemons back up.