From: Waiman Long <longman@redhat.com>
To: "Johannes Weiner" <hannes@cmpxchg.org>,
"Michal Hocko" <mhocko@kernel.org>,
"Roman Gushchin" <roman.gushchin@linux.dev>,
"Shakeel Butt" <shakeel.butt@linux.dev>,
"Muchun Song" <muchun.song@linux.dev>,
"Andrew Morton" <akpm@linux-foundation.org>,
"Tejun Heo" <tj@kernel.org>, "Michal Koutný" <mkoutny@suse.com>,
"Shuah Khan" <shuah@kernel.org>,
"Mike Rapoport" <rppt@kernel.org>
Cc: linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-mm@kvack.org, linux-kselftest@vger.kernel.org,
Sean Christopherson <seanjc@google.com>,
James Houghton <jthoughton@google.com>,
Sebastian Chlad <sebastianchlad@gmail.com>,
Guopeng Zhang <zhangguopeng@kylinos.cn>,
Li Wang <liwan@redhat.com>, Waiman Long <longman@redhat.com>,
Li Wang <liwang@redhat.com>
Subject: [PATCH v2 3/7] selftests: memcg: Iterate pages based on the actual page size
Date: Fri, 20 Mar 2026 16:42:37 -0400
Message-ID: <20260320204241.1613861-4-longman@redhat.com>
In-Reply-To: <20260320204241.1613861-1-longman@redhat.com>
The current test_memcontrol test faults in memory by writing a value
to the start of each page, assuming the default page size of 4k.
Micro-optimize this by iterating with the actual system page size
instead.
Reviewed-by: Li Wang <liwang@redhat.com>
Signed-off-by: Waiman Long <longman@redhat.com>
---
tools/testing/selftests/cgroup/test_memcontrol.c | 13 +++++++++----
1 file changed, 9 insertions(+), 4 deletions(-)
diff --git a/tools/testing/selftests/cgroup/test_memcontrol.c b/tools/testing/selftests/cgroup/test_memcontrol.c
index a25eb097b31c..babbfad10aaf 100644
--- a/tools/testing/selftests/cgroup/test_memcontrol.c
+++ b/tools/testing/selftests/cgroup/test_memcontrol.c
@@ -25,6 +25,7 @@
static bool has_localevents;
static bool has_recursiveprot;
+static int page_size;
int get_temp_fd(void)
{
@@ -60,7 +61,7 @@ int alloc_anon(const char *cgroup, void *arg)
char *buf, *ptr;
buf = malloc(size);
- for (ptr = buf; ptr < buf + size; ptr += PAGE_SIZE)
+ for (ptr = buf; ptr < buf + size; ptr += page_size)
*ptr = 0;
free(buf);
@@ -183,7 +184,7 @@ static int alloc_anon_50M_check(const char *cgroup, void *arg)
return -1;
}
- for (ptr = buf; ptr < buf + size; ptr += PAGE_SIZE)
+ for (ptr = buf; ptr < buf + size; ptr += page_size)
*ptr = 0;
current = cg_read_long(cgroup, "memory.current");
@@ -413,7 +414,7 @@ static int alloc_anon_noexit(const char *cgroup, void *arg)
return -1;
}
- for (ptr = buf; ptr < buf + size; ptr += PAGE_SIZE)
+ for (ptr = buf; ptr < buf + size; ptr += page_size)
*ptr = 0;
while (getppid() == ppid)
@@ -999,7 +1000,7 @@ static int alloc_anon_50M_check_swap(const char *cgroup, void *arg)
return -1;
}
- for (ptr = buf; ptr < buf + size; ptr += PAGE_SIZE)
+ for (ptr = buf; ptr < buf + size; ptr += page_size)
*ptr = 0;
mem_current = cg_read_long(cgroup, "memory.current");
@@ -1679,6 +1680,10 @@ int main(int argc, char **argv)
char root[PATH_MAX];
int i, proc_status;
+ page_size = sysconf(_SC_PAGE_SIZE);
+ if (page_size <= 0)
+ page_size = PAGE_SIZE;
+
ksft_print_header();
ksft_set_plan(ARRAY_SIZE(tests));
if (cg_find_unified_root(root, sizeof(root), NULL))
--
2.53.0