
Lessons learned from support cases - common things to look for

Does DNS function properly?

If not, any daemon that performs name resolution will be very slow.  Verify the system has an FQDN that resolves to itself, and check that it can resolve other hosts.
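
A quick check from the shell (assuming dig is installed; the node name below is a placeholder):

hostname --fqdn                    # should print this server's fully qualified name
dig +short $(hostname --fqdn)      # should return this server's own IP address
dig +short router1.example.com     # a managed node; should resolve promptly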

DNS is Important

NMIS/OMK applications expect DNS to work.  Managing individual /etc/hosts files does not scale.  opHA is one module in particular where this is critical.  If the customer does not have a local DNS server for internal hosts, consider running BIND on the NMIS master server; other NMIS/OMK servers can then use it as a name server.  This is not difficult to do and will save a lot of troubleshooting time going forward.

Does the system have the correct time?  Is it synced with a time server?

 

Compare the system UTC time with actual UTC time.  A site such as https://time.is/UTC will show current UTC time.
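
For example, the following commands show the system's idea of UTC and whether it is syncing (use whichever of ntpq or chronyc matches the NTP daemon in use):

date -u              # system UTC time; compare against https://time.is/UTC
ntpq -p              # peer status if ntpd is in use
chronyc tracking     # sync status if chrony is in use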

If the system time is not correct, it will cause a variety of problems:

  • Incorrect timestamps on events
  • Incorrect graph data
  • Transactions with other systems fail (e.g., cookies may already be expired at the time they are issued)

Perl Modules

If NMIS or OMK applications cannot locate a Perl module, it may be missing or it may have the wrong file permissions.  Also check the permissions on the directories that contain it.
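
A quick way to confirm a module loads and to see which copy is being used (Net::SNMP is only an example module here):

perl -MNet::SNMP -e 'print "ok\n"'     # fails loudly if the module cannot be loaded
perldoc -l Net::SNMP                   # prints the path of the module file
ls -l $(perldoc -l Net::SNMP)          # check the file permissions on that path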

NMIS Troubleshooting

Node Troubleshooting

Is the node reachable?

Ping it with a big echo request.
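
For example (payload size and host are placeholders):

ping -c 5 -s 1400 router1.example.com    # five echo requests with a 1400-byte payload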

What does nmap think about it?
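
A minimal sketch, assuming nmap is installed (host and ports are placeholders; the UDP scan needs root):

nmap -sn router1.example.com                 # is the host up at all?
sudo nmap -sU -p 161 router1.example.com     # is the SNMP port reachable?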

 

 

Manual Update & Collect Actions

If a node isn't providing the data we think it should, running a manual update and collect with debug enabled is often helpful.  Redirect or tee the output to a file so it can be reviewed later.
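
On an NMIS 8 system the commands are along these lines (node name and debug level are placeholders):

/usr/local/nmis8/bin/nmis.pl type=update node=router1 debug=true 2>&1 | tee /tmp/router1-update.log
/usr/local/nmis8/bin/nmis.pl type=collect node=router1 debug=true 2>&1 | tee /tmp/router1-collect.log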

 

Email alerts

Contacts.nmis must have the correct DutyTime format.

External Authentication

conf/Config.nmis must list the auth_method entries in the proper order, and the chosen method must actually be provisioned.
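
A quick way to see what is currently configured (path assumes a default NMIS 8 install):

grep auth_method /usr/local/nmis8/conf/Config.nmis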

If LDAP isn't working, tcpdump can be used to see the response code returned by the LDAP server.
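
A minimal capture, assuming the default LDAP port of 389 (use 636 for LDAPS) and substituting the real LDAP server address:

sudo tcpdump -i any -nn host ldap.example.com and port 389 -w /tmp/ldap.pcap
# read it back with 'tcpdump -r /tmp/ldap.pcap -A' or open the pcap in Wireshark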

Long collect times

Are we collecting many interfaces that are not necessary?

Check the node's view.json file for the number of interfaces and their types.  Look for common patterns such as interface type and description, then use the models or Config.nmis to disable collection of interfaces that are not needed.
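
As a rough sketch (the path and field names are assumptions for a default NMIS 8 install; adjust to what the node's view file actually contains):

grep -c '"ifDescr"' /usr/local/nmis8/var/router1-view.json                               # rough interface count
grep -o '"ifType" *: *"[^"]*"' /usr/local/nmis8/var/router1-view.json | sort | uniq -c   # breakdown by interface type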

Syslog

When troubleshooting syslog issues, the following script will gather more rsyslog daemon information than the NMIS support tool.

getSyslogData.sh

snmptrapd

When troubleshooting snmptrapd issues, the following script will gather more snmptrapd daemon information than the NMIS support tool.

getSnmpTrapdInfo.sh

Models

When troubleshooting models it's important to know whether all the OIDs referenced by a 'friendly name' within Model files have been defined in /usr/local/nmis8/mibs/nmis_mibs.oid.  Some Model files import or call other Model, Graph or Common files, so if an OID friendly name has not been defined in nmis_mibs.oid it may not be obvious which model file is causing the problem.  To validate friendly names more easily, the script below has been provided.  It parses all the OID friendly names out of the model files and looks for them in nmis_mibs.oid; if any are not found, the operator is notified.  At some point this script should be converted to Perl, which would make it much faster.

checkOid.sh
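
If only a handful of names are in question, a manual spot-check with grep achieves the same thing (paths assume a default NMIS 8 install; hrSystemUptime is just an example friendly name):

grep -r hrSystemUptime /usr/local/nmis8/models/            # where is the friendly name referenced?
grep hrSystemUptime /usr/local/nmis8/mibs/nmis_mibs.oid    # is it defined, and to which OID?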

opCharts Troubleshooting

TopN

Use the following utility to troubleshoot why charts are being populated into TopN.

RBAC (Role Based Access Control)

General scheme.

  • Create role.
  • Create user and assign a role.
  • Create an object and assign a privilege tag.
  • Assign the privilege tag to a role.

Based on this, the following script was created to pull all the role, user, object and privilege data out of a customer system.

getRbacInfo.sh

opEvents Troubleshooting

General

Grep for the following in opEvents.log:

  • Event ID
  • State Object ID
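
For example (the log path assumes a default OMK install; the ID value is a placeholder):

grep 5f3a1b2c9d0e /usr/local/omk/log/opEvents.log     # search by event ID or state object ID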

Event not found

Look in the raw log.

If an event is skipped due to old age but the time looks correct, check whether opeventsd was running at the time the event was received.

Event Processing

When troubleshooting event processing it's useful to understand the order in which the various opEvents configuration files are processed and the general function of each one.

State

When troubleshooting state it's important to realize that event.event and event.stateful are two completely different things.  event.stateful is referred to as 'State Type' in the node context view.  State is tracked based on event.stateful only; state status is generally up or down and may be found in the value of event.state.

EventParserRules.nmis provides ultimate flexibility by allowing the user to dictate what event.stateful and event.state will be presented to opEvents.  For example, event.event can be a completely different value than event.stateful:

  • event.event=Apple; event.stateful=Banana; event.state=up
  • event.event=Orange; event.stateful=Banana; event.state=down

With this in mind, always confirm event.stateful when troubleshooting state inconsistencies.

Poller/Master State Mismatch

If state has been lost between the poller and master servers, check whether a correlation rule has fired and suppressed the more specific event.

If the issue is not related to a correlation rule, look for the corresponding event on the poller.  In the event context, check the 'Actions taken for event' section.  Was a script executed that would have sent the event to the master?  Was it successful, and what was the exit code?

opFlow Troubleshooting

If flows are not rendering in the opFlow GUI, take the following actions.

Check Log Files

Review the following log files in /usr/local/omk/log:

  • opFlow.log
  • common.log
  • opDaemon.log

Verify Flow Data is Received

Using tcpdump, we can verify that flow data is being received by the server.  This example uses the default opFlow UDP port of 9995; specify the host that needs to be verified.
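
A capture along these lines works (replace the address with that of the exporting device):

sudo tcpdump -i any -nn udp port 9995 and host 192.0.2.10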

If the capture shows packets arriving from the network device, we know this server is receiving flow data.

Check the Flow Data

The next step is to ensure the host in question is providing valid data that nfdump can process.  Move to the /var/lib/nfdump directory and look for nfcapd files that end in a datestamp.  The datestamp denotes the time the capture file was started.  Select a file that is likely to contain samples from the host we wish to verify and execute the following command.
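
For example (the file name is a placeholder; -o raw prints every field of each record, including 'ip router', 'input', 'output', 'first' and 'last'):

cd /var/lib/nfdump
nfdump -r nfcapd.202401151200 -o raw > /tmp/flows.txt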

Now view the new text file with less or a text editor.  Each flow record lists a number of fields; the 'ip router' field denotes the source router for that flow sample.

Look for things that are not correct in the flow record.  The following issues have been found in past support cases.

  • input/output:  These fields should be the SNMP index numbers of the input and output interfaces.
  • first/last:  These are timestamps assigned by the router.  It's important that the router time is in sync with opFlow time, since opFlow uses these times to calculate statistics.  For example, if the router time is an hour ahead of the server time, opFlow will not display the data until the server time catches up with the router time.

 

 

omkd Troubleshooting

If mongod is not running, omkd will never start.  Ever.
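
So the first check is whether mongod is actually up (the service may be named mongod or mongodb depending on the distribution):

systemctl status mongod       # or: service mongod status on older init systems
ps -ef | grep [m]ongod        # is the process present at all?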

OMK General

Node synchronization with NMIS

Generally customers trust the node data that NMIS learns dynamically and use it to automatically update the node data for the OMK applications.  It's a good idea to install a cron job that automates this synchronization periodically.  The following commands work well for opEvents and opConfig respectively.
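
A sketch of such a cron entry follows; the act= arguments shown are assumptions that differ between product versions, so confirm them against the opEvents and opConfig CLI help on the system:

# /etc/cron.d/omk-node-sync (sketch only; verify the CLI arguments for the installed versions)
0 * * * * root /usr/local/omk/bin/opevents-cli.pl act=import-nodes
5 * * * * root /usr/local/omk/bin/opconfig-cli.pl act=import_from_nmis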

Configuration Files

If it's suspected that a particular configuration file is causing a problem, the following technique can isolate it:

  • Backup the suspect configuration file
  • Copy the default configuration file from omk/install into omk/conf
  • Restart the associated daemons and test
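
A minimal sketch of the procedure, using opCommon.nmis as the suspect file (substitute whichever file is under suspicion):

cp /usr/local/omk/conf/opCommon.nmis /usr/local/omk/conf/opCommon.nmis.bak
cp /usr/local/omk/install/opCommon.nmis /usr/local/omk/conf/opCommon.nmis
sudo service omkd restart     # or: systemctl restart omkd
# test, then restore the backup if the default file did not change the behaviour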

 
