

Exercise caution while editing /usr/local/omk/conf/opCommon.nmis, /usr/local/omk/conf/opCommon.json or /etc/mongod.conf; if a syntax error is introduced, all OMK applications will cease to function.

See How to make configuration changes to opCommon.nmis and other files for more detail on best practices for modifying these configuration files.
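
Before restarting services after an edit, it is worth syntax-checking the changed file first. A minimal sketch, assuming perl and jq are installed (python -m json.tool works equally well for the JSON check):

  # .nmis files are Perl data structures, so perl -c can syntax-check them:
  perl -c /usr/local/omk/conf/opCommon.nmis
  # validate the JSON file before restarting anything:
  jq . /usr/local/omk/conf/opCommon.json >/dev/null && echo OK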


Reducing a Server's Memory Footprint

  • NMIS 8

    • In omk/conf/opCommon.nmis:
      • omkd_workers is set to 10 by default. Try reducing this value to 2 and then restart the omkd service.
        If you then find you need more omkd_workers, increment this value by one and test again until you reach a suitable value for omkd_workers (see the config sketch below this list)
      • /omkd/omkd_max_requests: 100 to 500 (start at 100 and increase from this value if needed)
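      • A sketch of how these entries appear in omk/conf/opCommon.nmis; the surrounding structure is abbreviated and only the keys discussed above are shown:

          # omk/conf/opCommon.nmis (excerpt, Perl hash syntax):
          'omkd' => {
            ...
            'omkd_max_requests' => 100,
            'omkd_workers' => 2,
            ...
          },

        Then restart the daemon: sudo service omkd restart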


  • NMIS 9

    • In omk/conf/opCommon.json:
      • omkd_workers is set to 10 by default. Try reducing this value to 2 and then restart the omkd service.
        If you then find you need more omkd_workers, increment this value by one and test again until you reach a suitable value for omkd_workers (a config sketch follows this list)
      • /omkd/omkd_max_requests: 100 to 500 (start at 100 and increase from this value if needed)
    • In nmis9/conf/Config.nmis:
      • /system/nmisd_worker_max_cycles: 100
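      • A sketch of the corresponding entries, following the /omkd/... and /system/... paths above; surrounding structure abbreviated:

          # omk/conf/opCommon.json (excerpt):
          "omkd" : {
            "omkd_max_requests" : 100,
            "omkd_workers" : 2
          }

          # nmis9/conf/Config.nmis (excerpt, Perl hash syntax):
          'system' => {
            'nmisd_worker_max_cycles' => 100,
          },

        Then restart the daemon: sudo service omkd restart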


  • Consider using zswap, with its default settings, provided the server has more than 1GB RAM

    • https://www.kernel.org/doc/html/latest/vm/zswap.html provides that:
      • Zswap is a lightweight compressed cache for swap pages. It takes pages that are in the process of being swapped out and attempts to compress them
        into a dynamically allocated RAM-based memory pool. zswap basically trades CPU cycles for potentially reduced swap I/O.
        This trade-off can also result in a significant performance improvement if
        reads from the compressed cache are faster than reads from a swap device.
      • Zswap is a new feature as of v3.11 and interacts heavily with memory reclaim. This interaction has not been fully explored on the large set of potential configurations and workloads that exist.
        For this reason, zswap is a work in progress and should be considered experimental.
      • Overcommitted guests that share a common I/O resource can dramatically reduce their swap I/O pressure, avoiding heavy handed I/O throttling by the hypervisor.
        This allows more work to get done with less impact to the guest workload and guests sharing the I/O subsystem.
      • Users with SSDs as swap devices can extend the life of the device by drastically reducing life-shortening writes.
    • Performance Analysis of Compressed Caching Technique
      • See the CONCLUSION of this paper for insights as to why zswap should not be used on a server with less than 1GB RAM.
    • https://www.ibm.com/support/pages/new-linux-zswap-compression-functionality provides that for a server with 10GB RAM and 20% zswap pool size: 
      • For the x86 runs, the pool limit was hit earlier - starting at the 15.5 GB data point
      • On x86, the average zswap compression ratio was 3.6
    • The average zswap compression ratio of 3.6 ties in with the 15.5 GB data point at which real disk swap started, as follows
      (as given in https://www.ibm.com/support/pages/new-linux-zswap-compression-functionality):
      • 10 GB RAM with 2 GB zswap pool =
        • 8 GB available RAM + 2  GB zswap pool =
          • 8 GB available RAM + (2  GB zswap pool * 3.6 average zswap compression ratio) =
            • 8 GB available RAM + 7.2GB zswap compressed RAM =
              • 15.2 GB RAM, closely matching the observed 15.5 GB data point
    • For zswap to function correctly it needs swap space equivalent to the maximum uncompressed RAM zswap may hold in its pool
      • To cater for a zswap compression ratio of 4.0 with the default 20% (1/5th) zswap pool, a server with 10GB RAM would need
        2 GB zswap pool * (4.0 - 1) = 6.0 GB of swap: such a server should be provisioned with 10GB RAM plus at least 6.0 GB swap space
      • A zswap compression ratio of at least 3.1 can reasonably be expected, so when enabling zswap on a server with 10GB RAM installed,
        one should provision a minimum of 2 GB zswap pool * (3.1 - 1) = 4.2 GB swap space (a shell sketch of this rule follows below)
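      • The sizing rule above as a quick shell calculation; the example values (10 GB RAM, 20% pool, expected ratio 3.1) are the ones used in this section:

          # minimum swap (GB) = RAM * pool fraction * (expected compression ratio - 1)
          ram_gb=10; pool_pct=20; ratio=3.1
          echo "${ram_gb} * (${pool_pct} / 100) * (${ratio} - 1)" | bc -l
          # prints 4.2 (plus trailing zeros): provision at least 4.2 GB swap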
    • Don't be tempted to increase maximum pool percent from the default setting of 20: this will most likely affect performance adversely.
    • Command to view zswap info during operation when enabled:
      • sudo grep -rRn . /sys/kernel/debug/zswap/

    • Real world example 2021-12-22:

      • The following script was placed in /etc/rc.local to select the best zswap options available,
        since improvements have been made to zswap compression since this article was originally written
        • /etc/rc.local
          #!/bin/sh -e
          #
          # rc.local
          #
          # This script is executed at the end of each multiuser runlevel.
          # Make sure that the script will "exit 0" on success or any other
          # value on error.
          #
          # In order to enable or disable this script just change the execution
          # bits.
          #
          # By default this script does nothing.
          
          # enable zswap with the default 20% maximum pool size;
          # '||:' swallows errors so the script still exits 0 on kernels without zswap:
          echo 20 > /sys/module/zswap/parameters/max_pool_percent||:;
          echo 1 > /sys/module/zswap/parameters/enabled||:;
          # pick the best available compressor, in order of preference: zstd, lz4, lzo-rle
          # https://linuxreviews.org/Zswap
          sh -c 'CMP=/sys/module/zswap/parameters/compressor;echo zstd >"${CMP}"||echo lz4 >"${CMP}"||echo lzo-rle >"${CMP}"||:;' 2>/dev/null
          # use the best zpool compressed memory pool allocator available, preferring z3fold over zsmalloc:
          sh -c 'ZP=/sys/module/zswap/parameters/zpool;echo z3fold >"${ZP}"||echo zsmalloc >"${ZP}"||:;' 2>/dev/null
          
          exit 0;
        • On Debian 9.13 the following zswap parameters were set by the above /etc/rc.local script

          sudo -i
          chmod +x /etc/rc.local
          source /etc/rc.local
          grep . /sys/module/zswap/parameters/*
          
          /sys/module/zswap/parameters/compressor:lz4
          /sys/module/zswap/parameters/enabled:Y
          /sys/module/zswap/parameters/max_pool_percent:20
          /sys/module/zswap/parameters/zpool:zsmalloc
        • When this server was swapping, the following zswap stats were achieved:

          free -h
                        total        used        free      shared  buff/cache   available
          Mem:           2.0G        1.9G         58M        728K         27M         17M
          Swap:          2.7G        1.4G        1.4G
          
          
          sudo sh -c 'D=/sys/kernel/debug/zswap;echo;grep . "${D}/"*;perl -E  "say \"\nCompress Ratio: \".$(cat "${D}/stored_pages")*4096/$(cat "${D}/pool_total_size")" 2>/dev/null'
          
          /sys/kernel/debug/zswap/duplicate_entry:0
          /sys/kernel/debug/zswap/pool_limit_hit:565978
          /sys/kernel/debug/zswap/pool_total_size:287293440
          /sys/kernel/debug/zswap/reject_alloc_fail:593
          /sys/kernel/debug/zswap/reject_compress_poor:436
          /sys/kernel/debug/zswap/reject_kmemcache_fail:0
          /sys/kernel/debug/zswap/reject_reclaim_fail:1596
          /sys/kernel/debug/zswap/stored_pages:360385
          /sys/kernel/debug/zswap/written_back_pages:1145522
          
          Compress Ratio: 5.1370556204920
        • How to interpret the above stats:
          • The 'free -h' output shows that 1.4G of swap was in use.
            That figure refers to the uncompressed swapped pages:
            '/sys/kernel/debug/zswap/stored_pages * 4096' = '360385 * 4096' = 1476136960 bytes = 1.37476 GB
          • The 1.37476 GB of swap was compressed to '/sys/kernel/debug/zswap/pool_total_size' = 287293440 bytes = 0.26756 GB of RAM.
          • Compression Ratio is therefore '1.37476 GB/0.26756 GB' = 5.13706

        • This VM's swap size was set knowing the compression ratios zswap achieved on this VM.

          To allow zswap to achieve a compression ratio of 5.13706,
          zswap needs swap space equivalent to the maximum uncompressed RAM zswap may contain in its pool when compressing at compression ratio of 5.13706.

          Consequently, the minimum swap space this VM should then have is:
          • '2 GB RAM * /sys/module/zswap/parameters/max_pool_percent * Compression Ratio'
            = '2 GB RAM * 20% * 5.13706'
            = 2.055 GB swap

        • However, the reason why the swap on this VM was actually set at 2.75 GB is that monitoring the server with
          watch free -h

          alongside the zswap debugfs stats above showed that this VM's zswap compression ratio fluctuated between 4.8 and 6.1

          Consequently, to allow headroom above the observed maximum ratio, the decision was made to set

          '2GB RAM * 20% * 6.875'
          = 2.75 GB swap on this VM.
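
        • A sketch for watching the compression ratio change over time, reusing the debugfs counters shown earlier
          (the 2-second interval is arbitrary; single quotes make the $(cat ...) substitutions re-evaluate on every refresh):

          sudo watch -n 2 'echo "$(cat /sys/kernel/debug/zswap/stored_pages) * 4096 / $(cat /sys/kernel/debug/zswap/pool_total_size)" | bc -l'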


        • Use a swapfile for swap, as this provides better flexibility than a swap partition.
          To resize an existing /swapfile to 2.75 GB (2883584 KiB):

          sudo swapoff /swapfile
          # dd count is in 1024-byte blocks: 2883584 KiB = 2.75 GiB
          sudo dd if=/dev/zero of=/swapfile bs=1024 count=2883584
          sudo chmod 0600 /swapfile
          sudo mkswap /swapfile
          sudo swapon /swapfile
          swapon --show
          NAME      TYPE SIZE USED PRIO
          /swapfile file 2.7G 512K   -2
        • To make a new /swapfile persist after a reboot append this entry to /etc/fstab:

          /swapfile swap swap defaults 0 0
  • Setting cacheSizeGB appropriately when MongoDB table type is wiredtiger (the default)

    If your MongoDB installation is using table type wiredtiger, please read
    https://docs.mongodb.com/manual/reference/configuration-options/#storage.wiredTiger.engineConfig.cacheSizeGB

    • Consider setting cacheSizeGB appropriately, taking into consideration the memory requirements of other processes running on the box.
      With default mongod settings, a box with 16GB memory will typically be set to use more than 7GB of memory for cacheSizeGB.

      This may be too much considering the box may be running NMIS, OMK and other applications too.
      • Check how much memory mongod is using for this cache in /var/log/mongodb/mongod.log: search for cache_size, which is given in MB, for example:

        • STORAGE [initandlisten] wiredtiger_open config: create,cache_size=7414M

          cacheSizeGB is set in GB, so a cache_size of 7414M (7414 / 1024 = 7.24 GB) is equivalent to the following setting in /etc/mongod.conf:

          storage:
            wiredTiger:
              engineConfig:
                cacheSizeGB: 7.24
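
      • A one-liner sketch for pulling that value out of the log; the log path is the standard package default, and on MongoDB 4.4+ the log is JSON-formatted so the pattern may need adjusting:

          grep -oE 'cache_size=[0-9]+M' /var/log/mongodb/mongod.log | tail -n 1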


      • Example where 60% was decided as the ratio that should be used to compute MongoDB cacheSizeGB on a VM with 8GB memory.
        The expected values shown in the comments are those from the original worked example, and the documented 0.25 GB
        minimum check is written out as a real shell test:

        MONGO_USAGE=0.6
        # total memory in kB from /proc/meminfo, e.g. TOTAL_MEM=7673702
        TOTAL_MEM=$(grep -iF "memtotal" /proc/meminfo | awk '{print $2}')
        # convert to GB and take the 60% share, e.g. MONGO_MEM≈4.3909 (GB)
        MONGO_MEM=$(echo "(${TOTAL_MEM} / (1024 * 1024)) * ${MONGO_USAGE}" | bc -l)
        # round to two decimals, e.g. MONGO_MEM=4.39 (GB)
        MONGO_MEM=$(printf "%.2f" "${MONGO_MEM}")
        # apply MongoDB's default sizing formula, 50% of (memory - 1 GB), to that share,
        # e.g. CACHE_MEM_FINAL=1.695 (GB)
        CACHE_MEM_FINAL=$(echo "(${MONGO_MEM} - 1) * 0.5" | bc -l)
        # round to two decimals, e.g. CACHE_MEM_FINAL=1.70 (GB)
        CACHE_MEM_FINAL=$(printf "%.2f" "${CACHE_MEM_FINAL}")
        # per https://docs.mongodb.com/manual/reference/configuration-options/#storage-options
        # cacheSizeGB must be at least 0.25 GB:
        if [ "$(echo "${CACHE_MEM_FINAL} < 0.25" | bc -l)" -eq 1 ]; then CACHE_MEM_FINAL=0.25; fi
        echo $CACHE_MEM_FINAL
        # 1.70
      • Set cacheSizeGB to 1.7
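      • To apply the computed value, set it in /etc/mongod.conf and restart mongod; the systemd unit name assumes a standard package install:

          storage:
            wiredTiger:
              engineConfig:
                cacheSizeGB: 1.7

          sudo systemctl restart mongod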

