The one-liner:

dd if=/dev/zero bs=1G count=10 | gzip -c > 10GB.gz

This is brilliant.
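A hedged sketch of the same pipeline at 1/100 scale (100 MiB of zeros instead of 10 GiB), so the compression ratio is quick to verify locally; the file name is illustrative:

```shell
# Same trick, smaller: 100 MiB of zeros piped through gzip.
dd if=/dev/zero bs=1M count=100 2>/dev/null | gzip -c > 100MB.gz

ls -l 100MB.gz     # roughly 100 KiB on disk
gzip -l 100MB.gz   # reports 104857600 uncompressed bytes
```

One caveat: gzip's trailer stores the uncompressed size modulo 2^32 (RFC 1952's ISIZE field), so for the full 10 GB file `gzip -l` would report only 2 GiB, even though decompression still produces all 10 GB.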

  • DreamButt@lemmy.world · 17 hours ago

    No, but that’s an interesting question. Ultimately it probably comes down to hardware specs, or, depending on the particular bot and its environment, the specs of the container it’s running in

    Even with macOS’s style of compressing inactive memory pages, you’ll still have a hard cap that can be reached with the same technique (just with a larger uncompressed file)
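The hard cap is easy to demonstrate with no OS-level compression in the picture: cap a subshell's address space and ask it to hold more than that in memory. A hedged sketch, assuming bash and Linux-style `ulimit -v` semantics; the sizes are arbitrary:

```shell
# Cap the subshell's virtual address space at 256 MiB (ulimit -v takes KiB),
# then try to hold a 512 MiB command-substitution result in a shell variable.
# The allocation fails at the cap, long before any system-wide OOM condition.
(
  ulimit -v 262144
  data=$(dd if=/dev/zero bs=1M count=512 2>/dev/null | tr '\0' 'a')
  echo "fit under the cap"
) 2>/dev/null || echo "allocation failed at the cap"
```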

    • 4am@lemm.ee · 13 hours ago

      How long does it take for a page to be considered inactive? Do OOM conditions immediately trigger compression, or would the process die first?

      • DreamButt@lemmy.world · 37 minutes ago

        So I’m not an expert, but my understanding is that the flow is roughly:

        1. Available memory gets low
        2. Compress based on LRU rules
        3. Use swap
        4. OOM

        So it’s more of a preventative measure, afaik
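On Linux, the quantities the steps above act on are directly observable. A hedged sketch (macOS would use `vm_stat` and `memory_pressure` instead; `dmesg` may require root, which the `|| true` absorbs):

```shell
# Step 1's trigger (available memory) and step 3's resource (swap), as the
# kernel reports them in /proc/meminfo.
awk '/^(MemAvailable|SwapTotal|SwapFree):/ {print $1, $2, $3}' /proc/meminfo

# Step 4's verdicts land in the kernel log, if any OOM kills have happened.
dmesg 2>/dev/null | grep -i 'out of memory' || true
```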