...

  • ABI 1 (NMIS 8)

    • In omk/conf/opCommon.nmis:
      • omkd_workers is set to 10 by default. Try reducing this number to 2 and then restart the omkd service.
        If you then find you need more workers, increase omkd_workers by one and test again until you reach a suitable value.
      • /omkd/omkd_max_requests: 100 to 500 (start at 100 and increase from this value if needed)
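
A minimal sketch of what these two settings can look like in omk/conf/opCommon.nmis (Perl hash syntax; surrounding keys are omitted, and the exact nesting in your install may differ):

```
'omkd' => {
  'omkd_workers'      => 2,    # default is 10; start low and raise one at a time
  'omkd_max_requests' => 100,  # start at 100, increase toward 500 only if needed
},
```

Restart the omkd service after editing for the change to take effect.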


  • ABI 2 (NMIS 9)

    • In omk/conf/opCommon.json:
      • omkd_workers is set to 10 by default. Try reducing this number to 2 and then restart the omkd service.
        If you then find you need more workers, increase omkd_workers by one and test again until you reach a suitable value.
      • /omkd/omkd_max_requests: 100 to 500 (start at 100 and increase from this value if needed)
    • In nmis9/conf/Config.nmis:
      • /system/nmisd_worker_max_cycles: 100
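
A minimal sketch of the corresponding settings in omk/conf/opCommon.json (surrounding keys omitted; the exact nesting in your install may differ):

```
{
  "omkd": {
    "omkd_workers": 2,
    "omkd_max_requests": 100
  }
}
```

The nmis9/conf/Config.nmis entry uses Perl hash syntax: 'system' => { 'nmisd_worker_max_cycles' => 100 }. Restart the omkd and nmisd services after editing.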


  • Consider using zswap, with its default settings, provided the server has more than 1GB RAM

    • https://www.kernel.org/doc/html/latest/vm/zswap.html provides that:
      • Zswap is a lightweight compressed cache for swap pages. It takes pages that are in the process of being swapped out and attempts to compress them
        into a dynamically allocated RAM-based memory pool. zswap basically trades CPU cycles for potentially reduced swap I/O.
        This trade-off can also result in a significant performance improvement if reads from the compressed cache are faster than reads from a swap device.
      • Zswap is a new feature as of v3.11 and interacts heavily with memory reclaim. This interaction has not been fully explored on the large set of potential configurations and workloads that exist.
        For this reason, zswap is a work in progress and should be considered experimental.
      • Overcommitted guests that share a common I/O resource can dramatically reduce their swap I/O pressure, avoiding heavy handed I/O throttling by the hypervisor.
        This allows more work to get done with less impact to the guest workload and guests sharing the I/O subsystem.
      • Users with SSDs as swap devices can extend the life of the device by drastically reducing life-shortening writes.
    • Performance Analysis of Compressed Caching Technique
      • See the CONCLUSION of this paper for insights as to why zswap should not be used on a server with less than 1GB RAM.
    • https://www.ibm.com/support/pages/new-linux-zswap-compression-functionality provides that for a server with 10GB RAM and 20% zswap pool size: 
      • For the x86 runs, the pool limit was hit earlier - starting at the 15.5 GB data point
      • On x86, the average zswap compression ratio was 3.6
    • The average zswap compression ratio of 3.6 ties in with the 15.5 GB data point at which real disk swap started
      (as given in https://www.ibm.com/support/pages/new-linux-zswap-compression-functionality), as follows:
      • 10 GB RAM with 2 GB zswap pool =
        • 8 GB available RAM + 2 GB zswap pool =
          • 8 GB available RAM + (2 GB zswap pool × 3.6 average zswap compression ratio) =
            • 8 GB available RAM + 7.2 GB zswap compressed RAM =
              • 15.2 GB RAM
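
The arithmetic above can be replayed as a quick sanity check (figures taken from the IBM article cited above):

```python
ram_gb = 10              # installed RAM
pool_fraction = 0.20     # default zswap max_pool_percent of 20
compression_ratio = 3.6  # average x86 zswap compression ratio (IBM article)

pool_gb = ram_gb * pool_fraction             # 2 GB reserved as zswap pool
available_gb = ram_gb - pool_gb              # 8 GB of ordinary RAM
effective_gb = available_gb + pool_gb * compression_ratio

print(round(effective_gb, 1))  # → 15.2, close to the 15.5 GB point where real swapping began
```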
    • For zswap to function correctly, it needs swap space equivalent to the uncompressed data it may compress
      • To cater for a zswap compression ratio of 5 with the default 20% zswap pool (1/5th of RAM), ensure swap space equivalent to installed RAM: a server with 10 GB RAM should have 10 GB of swap space.
        • A good rule of thumb when enabling zswap is to give the server as much swap space as it has installed RAM.
    • Don't be tempted to increase maximum pool percent from the default setting of 20: this will most likely affect performance adversely.
    • Command to view zswap info during operation when enabled:
      • sudo grep -R . /sys/kernel/debug/zswap/
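
zswap is typically enabled via kernel boot parameters, both documented in the kernel zswap page linked above. A sketch of the GRUB change, keeping the default 20% pool (file location and regeneration command vary by distribution):

```
# /etc/default/grub -- append to the existing GRUB_CMDLINE_LINUX value
GRUB_CMDLINE_LINUX="... zswap.enabled=1 zswap.max_pool_percent=20"
```

Then regenerate the GRUB configuration (update-grub on Debian/Ubuntu, grub2-mkconfig -o /boot/grub2/grub.cfg on RHEL/CentOS) and reboot. The kernel documentation also describes runtime toggling via /sys/module/zswap/parameters/enabled.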


...