Introduction

Open-AudIT can be configured to use 'collector' servers to ease the processing burden of discovering large networks. The Open-AudIT Collectors are simply other servers which have Open-AudIT installed, but which use a central Open-AudIT server (the 'primary' server) for their database. In the configuration described, users can access Open-AudIT via any server (primary or Collector).

Assumptions

We have a primary Open-AudIT server on 192.168.1.1 and a Collector Open-AudIT server on 192.168.2.2.

Open-AudIT is installed on both servers and functioning.

The Collector Open-AudIT server's IP address could be that of a load balancer device.

The load balancer should forward the request/response to an appropriately configured Open-AudIT Collector - of which there may be several.


Network Requirements

The described configuration does not account for any firewall rules that may be on the Open-AudIT servers or on any other devices in between them.

The Collector will need to be able to communicate with the primary over the standard MySQL port of 3306.
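
As a quick illustration (not a required step), you can confirm this from the Collector with a simple port test; 192.168.1.1 is the primary server from the assumptions above:

# Run on the Collector - checks that the primary's MySQL port is reachable
nc -zv 192.168.1.1 3306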

The Collector will require network access using IPMI, SNMP, WMI and SSH ports to the target devices.
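
To sanity-check this from the Collector, a port scan of a known target can help; 192.168.3.10 below is purely an example target address, and the UDP ports (SNMP 161, IPMI 623) require a privileged scan:

# Run on the Collector against an example target device
# TCP: SSH (22), WMI/RPC (135), SMB (445); UDP: SNMP (161), IPMI (623)
sudo nmap -Pn -sS -sU -p T:22,135,445,U:161,623 192.168.3.10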

Target devices will require port 80 access to the Collector (in the case of an audit script being run on the target and needing to have its data returned to the Collector).
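
From a target device (or any host on a target subnet), a simple HTTP request will confirm that port 80 on the Collector (or on the load balancer address, if one is in use) is reachable; 192.168.2.2 is the Collector from the assumptions above:

# Run on a target device
curl -I http://192.168.2.2/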


Configuring the Load Balancing Device

This is very device specific and is left to the customer to implement for their specific environment.

The load balancer should use an appropriate algorithm (round robin, load, etc) to determine which Collector should receive the next request.
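
As a hedged sketch only, a minimal HAProxy fragment along the following lines would balance requests across two example Collectors using round robin; the choice of HAProxy and the 192.168.2.3 address are assumptions for illustration, and the equivalent configuration on other load balancing products will differ:

# Illustrative HAProxy fragment - listens on port 80 and balances across two example Collectors
frontend openaudit_in
    bind *:80
    default_backend openaudit_collectors

backend openaudit_collectors
    balance roundrobin
    server collector1 192.168.2.2:80 check
    server collector2 192.168.2.3:80 check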


Configuring Open-AudIT

Configuring the Primary Open-AudIT Server.

Ensure MySQL is listening on all addresses, or at least on the primary server's IP address used for Collector communication.

We need to either comment out the bind-address line in the MySQL configuration file or set it to the primary server's IP address.

On Debian/Ubuntu the configuration file is /etc/mysql/my.cnf, while on RedHat/CentOS it is /etc/my.cnf.


bind-address = 192.168.1.1


Then restart the MySQL service. Note that the service is typically named mysqld on RedHat/CentOS and mysql on Debian/Ubuntu.

service mysqld restart

On the primary Open-AudIT server, we need to ensure the MySQL 'openaudit' user can connect from any external address.

You could also lock this down further by specifying a wildcard address such as '192.168.%', if required (see the example after the statements below).

UPDATE mysql.user SET Host = '%' WHERE user = 'openaudit';
UPDATE mysql.db SET Host = '%' WHERE user = 'openaudit';
FLUSH PRIVILEGES;
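
If you choose the tighter wildcard mentioned above, the equivalent statements would be (adjust the subnet to suit your environment):

UPDATE mysql.user SET Host = '192.168.%' WHERE user = 'openaudit';
UPDATE mysql.db SET Host = '192.168.%' WHERE user = 'openaudit';
FLUSH PRIVILEGES;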

Configuring the Collector Open-AudIT Server(s).

We now need to configure the Open-AudIT Collector to use the primary server database.

Edit /usr/local/open-audit/code_igniter/application/config/database.php

Change the $db['default']['hostname'] = "localhost"; line to use the address of the primary server, thus:

$db['default']['hostname'] = "192.168.1.1";
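
To verify the change, you can test the connection from the Collector with the MySQL client (this is purely a connectivity check, using the 'openaudit' user's password):

# Run on the Collector - should print "1" if the primary accepts the connection
mysql -h 192.168.1.1 -u openaudit -p -e "SELECT 1;"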


All done!



How it Works

The primary Open-AudIT server performs all scheduling of Discovery, as well as the initial stage of every Discovery run. Either the primary or the Collector servers can be used for normal Open-AudIT operation. All data is stored in the primary Open-AudIT server's database.


  1. Discovery run is commenced.
  2. The primary Open-AudIT server runs the Nmap (discover_subnet.sh) script locally. When the script is initiated, it is given the IP address of the Collector Open-AudIT server to send results back to.
  3. The script runs and performs a ping scan of the requested subnet.
  4. Each responding device is in turn scanned for open ports.
  5. Upon completion of a target device scan, the script submits the device details to the Collector Open-AudIT server.
  6. The primary Open-AudIT server is still running the subnet discovery script at this point and continues on to the next responding device from the subnet.
  7. The Collector server accepts the subnet scan result and proceeds to audit the target device via IPMI, SNMP, SSH and/or a Windows audit. The Collector talks directly to non-computer devices. Computers have an initial audit script copied to them from the Collector, and then return the result to the Collector in a separate conversation via the load balancer (see the example after this list).
  8. The Collector server processes all data while using the database of the primary server, instead of a locally hosted database.
  9. The primary server's Discovery script reaches the end of its Nmap scanning (sending each found device to the Collector) and finishes.
  10. Discovery run is completed.
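
If you want to observe steps 5 to 7 in practice, watching the web server access log on the Collector during a Discovery run will show the primary's submissions and the returned audit results arriving. The log path below assumes Apache on Debian/Ubuntu; on RedHat/CentOS it is typically /var/log/httpd/access_log:

# Run on the Collector while a Discovery is running
tail -f /var/log/apache2/access.log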

Caveats

The described configuration does not offload the initial Nmap scanning of target networks. This could be alleviated by configuring individual discovery runs on each individual Collector (using Open-AudIT Enterprise), though this is obviously a trade-off against the ease of managing the discovery schedule.


Diagram

The basics of the communication flow (as per the list above) are shown below.