How to keep executable code in memory even under memory pressure, in Linux?

The goal here is to keep the executable code of every running process in memory during memory pressure, on Linux.

On Linux, I am able to instantly (within about 1 second) cause high memory pressure and trigger the OOM-killer with:

    stress --vm-bytes $(awk '/MemAvailable/{printf "%d\n", $2 + 4000;}' < /proc/meminfo)k --vm-keep -m 4 --timeout 10s

(code from here), with 24000MB max RAM inside a Qubes OS R4.0 Fedora 28 AppVM. EDIT4: perhaps relevant, yet I forgot to mention it: I have no swap enabled (i.e. CONFIG_SWAP is not set).

dmesg reports:

[  867.746593] Mem-Info:
[  867.746607] active_anon:1390927 inactive_anon:4670 isolated_anon:0
                active_file:94 inactive_file:72 isolated_file:0
                unevictable:13868 dirty:0 writeback:0 unstable:0
                slab_reclaimable:5906 slab_unreclaimable:12919
                mapped:1335 shmem:4805 pagetables:5126 bounce:0
                free:40680 free_pcp:978 free_cma:0

The interesting parts are active_file:94 and inactive_file:72: they are in kilobytes and are very low.
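
For comparison, the same file-cache counters can also be watched from /proc/meminfo (which reports its values in kB) while the stress command runs; this one-liner is only an illustration, not part of the original report:

    # refresh the Active(file)/Inactive(file) lines every second
    watch -n1 "grep -E 'Active\(file\)|Inactive\(file\)' /proc/meminfo"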

The problem here is that, during that period of memory pressure, executable code is being re-read from disk, causing disk thrashing, which leads to a frozen OS. (In the case above it only lasts for less than 1 second, though.)
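
One rough way to confirm that code pages are being re-read from disk during that window is to watch the major page fault counter in /proc/vmstat; this is only an illustrative sketch, not something from the original report:

    # pgmajfault counts faults that had to read a page back in from disk;
    # a fast-rising value while stress runs is the thrashing described above
    while true; do grep '^pgmajfault' /proc/vmstat; sleep 1; done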

I see an interesting piece of code in the kernel, in mm/vmscan.c:

        if (page_referenced(page, 0, sc->target_mem_cgroup,
                            &vm_flags)) {
                nr_rotated += hpage_nr_pages(page);
                /*
                 * Identify referenced, file-backed active pages and
                 * give them one more trip around the active list. So
                 * that executable code get better chances to stay in
                 * memory under moderate memory pressure.  Anon pages
                 * are not likely to be evicted by use-once streaming
                 * IO, plus JVM can create lots of anon VM_EXEC pages,
                 * so we ignore them here.
                 */
                if ((vm_flags & VM_EXEC) && page_is_file_cache(page)) {
                        list_add(&page->lru, &l_active);
                        continue;
                }
        }

I am thinking that if someone could point out how to change this so that, instead of give them one more trip around the active list, we make it give them infinite trips around the active list, then the job should be done. Or perhaps there is some other way?

I can patch and test custom kernels. I just don't know how to change the code so that active executable code is always kept in memory (which, I believe, would effectively avoid the disk thrashing).

EDIT: Here is what I've got working so far (applied on top of kernel 4.18.5):

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 32699b2..7636498 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -208,7 +208,7 @@ enum lru_list {

 #define for_each_lru(lru) for (lru = 0; lru < NR_LRU_LISTS; lru++)

-#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_ACTIVE_FILE; lru++)
+#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_INACTIVE_FILE; lru++)

 static inline int is_file_lru(enum lru_list lru)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 03822f8..1f3ffb5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2234,7 +2234,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,

    anon  = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON, MAX_NR_ZONES) +
        lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, MAX_NR_ZONES);
-   file  = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
+   file  = //lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
        lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES);

    spin_lock_irq(&pgdat->lru_lock);
@@ -2345,7 +2345,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
             sc->priority == DEF_PRIORITY);

    blk_start_plug(&plug);
-   while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
+   while (nr[LRU_INACTIVE_ANON] || //nr[LRU_ACTIVE_FILE] ||
                    nr[LRU_INACTIVE_FILE]) {
        unsigned long nr_anon, nr_file, percentage;
        unsigned long nr_scanned;
@@ -2372,7 +2372,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
         * stop reclaiming one LRU and reduce the amount scanning
         * proportional to the original scan target.
         */
-       nr_file = nr[LRU_INACTIVE_FILE] + nr[LRU_ACTIVE_FILE];
+       nr_file = nr[LRU_INACTIVE_FILE] //+ nr[LRU_ACTIVE_FILE]
+           ;
        nr_anon = nr[LRU_INACTIVE_ANON] + nr[LRU_ACTIVE_ANON];

        /*
@@ -2391,7 +2392,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
            percentage = nr_anon * 100 / scan_target;
        } else {
            unsigned long scan_target = targets[LRU_INACTIVE_FILE] +
-                       targets[LRU_ACTIVE_FILE] + 1;
+                       //targets[LRU_ACTIVE_FILE] + 
+                       1;
            lru = LRU_FILE;
            percentage = nr_file * 100 / scan_target;
        }

Also viewable here on github, because in the above code the tabs got transformed into spaces! (mirror1, mirror2)
I have tested the above patch (now with a maximum of 4000MB RAM, yes, 20G less than before!) even with a Firefox compilation that was known to disk-thrash the OS into a permanent freeze, and it no longer happens (the oom-killer kills the offending process(es) almost instantly). With the above stress command it now yields:

[  745.830511] Mem-Info:
[  745.830521] active_anon:855546 inactive_anon:20453 isolated_anon:0
                active_file:26925 inactive_file:76 isolated_file:0
                unevictable:10652 dirty:0 writeback:0 unstable:0
                slab_reclaimable:26975 slab_unreclaimable:13525
                mapped:24238 shmem:20456 pagetables:4028 bounce:0
                free:14935 free_pcp:177 free_cma:0

That's active_file:26925 inactive_file:76, almost 27 megs of active file...
So, I don't know how good this is. Am I keeping all active files in memory rather than just the executables? During the firefox compilation I had about 500 megs of Active(file) (EDIT2: that is according to cat /proc/meminfo|grep -F -- 'Active(file)', which shows a different value than the active_file: above from dmesg!!!), which makes me doubt it was only exes/libs...
Maybe someone can suggest how to keep only the executable code? (If that is not already what is happening.)
Any thoughts?

EDIT3: With the above patch, it seems necessary to (periodically?) run sudo sysctl vm.drop_caches=1 to free some stale memory(?), so that if I call stress after a firefox compilation I get: active_file:142281 inactive_file:0 isolated_file:0 (142 megs); if I then drop the file caches (another way: echo 1|sudo tee /proc/sys/vm/drop_caches) and run stress again, I get: active_file:22233 inactive_file:160 isolated_file:0 (22 megs). I am not sure about this...

Results without the above patch: here
Results with the above patch: here

WARNING: do not use this patch if you have swap enabled, because two users reported worse effects. I have only tested this patch with swap disabled in the kernel! (i.e. CONFIG_SWAP is not set)

Until further notice (or until someone comes up with something better), I am using (and it works, for me) the following patch in order to avoid any disk thrashing / OS freeze when about to run out of memory, so that the OOM-killer triggers as soon as possible (within at most 1 second):

revision 3
preliminary patch to avoid disk thrashing (constant reading) under memory pressure before OOM-killer triggers
more info: https://gist.github.com/constantoverride/84eba764f487049ed642eb2111a20830

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 32699b2..7636498 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -208,7 +208,7 @@ enum lru_list {

 #define for_each_lru(lru) for (lru = 0; lru < NR_LRU_LISTS; lru++)

-#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_ACTIVE_FILE; lru++)
+#define for_each_evictable_lru(lru) for (lru = 0; lru <= LRU_INACTIVE_FILE; lru++)

 static inline int is_file_lru(enum lru_list lru)
 {
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 03822f8..1f3ffb5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2086,9 +2086,9 @@ static unsigned long shrink_list(enum lr
                 struct scan_control *sc)
 {
    if (is_active_lru(lru)) {
-       if (inactive_list_is_low(lruvec, is_file_lru(lru),
-                    memcg, sc, true))
-           shrink_active_list(nr_to_scan, lruvec, sc, lru);
+       //if (inactive_list_is_low(lruvec, is_file_lru(lru),
+       //           memcg, sc, true))
+       //  shrink_active_list(nr_to_scan, lruvec, sc, lru);
        return 0;
    }

@@ -2234,7 +2234,7 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,

    anon  = lruvec_lru_size(lruvec, LRU_ACTIVE_ANON, MAX_NR_ZONES) +
        lruvec_lru_size(lruvec, LRU_INACTIVE_ANON, MAX_NR_ZONES);
-   file  = lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
+   file  = //lruvec_lru_size(lruvec, LRU_ACTIVE_FILE, MAX_NR_ZONES) +
        lruvec_lru_size(lruvec, LRU_INACTIVE_FILE, MAX_NR_ZONES);

    spin_lock_irq(&pgdat->lru_lock);
@@ -2345,7 +2345,7 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
             sc->priority == DEF_PRIORITY);

    blk_start_plug(&plug);
-   while (nr[LRU_INACTIVE_ANON] || nr[LRU_ACTIVE_FILE] ||
+   while (nr[LRU_INACTIVE_ANON] || //nr[LRU_ACTIVE_FILE] ||
                    nr[LRU_INACTIVE_FILE]) {
        unsigned long nr_anon, nr_file, percentage;
        unsigned long nr_scanned;
@@ -2372,7 +2372,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
         * stop reclaiming one LRU and reduce the amount scanning
         * proportional to the original scan target.
         */
-       nr_file = nr[LRU_INACTIVE_FILE] + nr[LRU_ACTIVE_FILE];
+       nr_file = nr[LRU_INACTIVE_FILE] //+ nr[LRU_ACTIVE_FILE]
+           ;
        nr_anon = nr[LRU_INACTIVE_ANON] + nr[LRU_ACTIVE_ANON];

        /*
@@ -2391,7 +2392,8 @@ static void shrink_node_memcg(struct pglist_data *pgdat, struct mem_cgroup *memc
            percentage = nr_anon * 100 / scan_target;
        } else {
            unsigned long scan_target = targets[LRU_INACTIVE_FILE] +
-                       targets[LRU_ACTIVE_FILE] + 1;
+                       //targets[LRU_ACTIVE_FILE] + 
+                       1;
            lru = LRU_FILE;
            percentage = nr_file * 100 / scan_target;
        }
@@ -2409,10 +2411,12 @@ static void shrink_node_memcg(struct pgl
        nr[lru] = targets[lru] * (100 - percentage) / 100;
        nr[lru] -= min(nr[lru], nr_scanned);

+       if (LRU_FILE != lru) { //avoid this block for LRU_ACTIVE_FILE
        lru += LRU_ACTIVE;
        nr_scanned = targets[lru] - nr[lru];
        nr[lru] = targets[lru] * (100 - percentage) / 100;
        nr[lru] -= min(nr[lru], nr_scanned);
+       }

        scan_adjusted = true;
    }

Unfortunately the above converted tabs into spaces, so if you want the raw patch, it is here.

What this patch does is avoid evicting Active(file) pages while under memory pressure, and therefore avoid causing kswapd0 (though it shows up in iotop as each program itself) to re-read every running process's executable pages on every context switch just to allow the program to (continue to) run. Thus a ton of disk thrashing is avoided and the OS does not freeze into a crawl.
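
As a side note, the per-process re-reads mentioned above can be watched with iotop; an illustrative invocation (using iotop's standard options) is:

    # only processes actually doing I/O (-o), per process (-P),
    # accumulated totals (-a), batch mode suitable for logging (-b)
    sudo iotop -o -P -a -b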

The above was tested with kernel 4.18.5 (and now testing 4.18.7), inside dom0 (Fedora 25) of Qubes OS 4.0 and in all the VMs (Fedora 28) that I am using.

For the first version of this patch, which also works just as well (apparently), see the EDIT on the question above.

UPDATE: After using this patch for a while on an ArchLinux laptop with 16G of RAM (minus 512M reserved by the integrated graphics card) and no swap (disabled in the kernel as well), I can say that the system can run out of memory sooner than without le9d.patch (revision 3), and so the OOM-killer triggers for Xorg or chromium or others when it would not do so without the patch. As a mitigation, which so far seems to work for me, I have been running echo 1 > /proc/sys/vm/drop_caches whenever the Active(file) number in /proc/meminfo exceeds 2G, i.e. 2000000 KB (for example, get the number of KB via this code: grep 'Active(file):' /proc/meminfo|tr -d ' '|cut -f2 -d:|sed 's/kB//'), and doing this check with a sleep 5 in between. But lately, in order to compile firefox-hg in /tmp, which is tmpfs and thus ends up using 12G, and to make sure it does not get OOM-killed, I have been using 500000 KB instead of 2000000 KB. It is certainly better than freezing the entire system (i.e. what happens without le9d.patch), which is what would have occurred in this firefox compilation case. Without this check, Active(file) does not go over 4G, but that is enough to OOM-kill Xorg if something needs more memory, such as in this firefox compilation case, or even when just copying many gigabytes via Midnight Commander (if I remember that correctly).
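
For reference, here is a minimal sketch of that workaround loop; the grep pipeline, the drop_caches write, the sleep 5 and the thresholds are the ones mentioned above, while the surrounding loop is just an illustration (it must run as root):

    #!/bin/sh
    # drop the page cache whenever Active(file) grows beyond the threshold (in KB)
    threshold_kb=2000000   # 500000 for heavy tmpfs builds, as noted above
    while true; do
        active_kb=$(grep 'Active(file):' /proc/meminfo|tr -d ' '|cut -f2 -d:|sed 's/kB//')
        if [ "$active_kb" -gt "$threshold_kb" ]; then
            echo 1 > /proc/sys/vm/drop_caches
        fi
        sleep 5
    done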

The memory.min parameter of the cgroups-v2 memory controller should help here.

Namely, let me quote:

"Hard memory protection. If the memory usage of a cgroup is within its effective min boundary, the cgroup’s memory won’t be reclaimed under any conditions. If there is no unprotected reclaimable memory available, OOM killer is invoked."

https://www.kernel.org/doc/html/latest/admin-guide/cgroup-v2.html
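
A minimal sketch of how this could be applied, assuming a cgroup-v2 (unified) hierarchy mounted at /sys/fs/cgroup; the cgroup name "protected" and the 512 MiB value are made up for illustration:

    # enable the memory controller for child cgroups and create a protected group
    echo +memory > /sys/fs/cgroup/cgroup.subtree_control
    mkdir -p /sys/fs/cgroup/protected
    # memory used within this boundary is not reclaimed (hard protection, see quote above)
    echo $((512*1024*1024)) > /sys/fs/cgroup/protected/memory.min
    # move the current shell (and whatever it starts) into the protected cgroup
    echo $$ > /sys/fs/cgroup/protected/cgroup.procs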

To answer the question, here is a simple/preliminary patch that does not evict Active(file) (as seen in /proc/meminfo) if it is below 256 MiB, and it seems to work ok (no disk thrashing) with linux-stable 5.2.4:

diff --git a/mm/vmscan.c b/mm/vmscan.c
index dbdc46a84f63..7a0b7e32ff45 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2445,6 +2445,13 @@ static void get_scan_count(struct lruvec *lruvec, struct mem_cgroup *memcg,
            BUG();
        }

+    if (NR_ACTIVE_FILE == lru) {
+      long long kib_active_file_now=global_node_page_state(NR_ACTIVE_FILE) * MAX_NR_ZONES;
+      if (kib_active_file_now <= 256*1024) {
+        nr[lru] = 0; //don't reclaim any Active(file) (see /proc/meminfo) if they are under 256MiB
+        continue;
+      }
+    }
        *lru_pages += size;
        nr[lru] = scan;
    }

Note that some yet-to-be-found regression in kernel 5.3.0-rc4-gd45331b00ddb causes the system to freeze (without disk thrashing, and sysrq still works) even without this patch.

(Any new developments related to this should happen here.)