* [LTP] [PATCH] kernel/numa: sleep 2s in test01 for waiting numastat updating
@ 2016-05-30 9:56 Dong ZHu
2016-05-30 11:55 ` Cyril Hrubis
0 siblings, 1 reply; 5+ messages in thread
From: Dong ZHu @ 2016-05-30 9:56 UTC (permalink / raw)
To: ltp
From 98db5da4ec362e4f8e64b119a8d32245ca06bf36 Mon Sep 17 00:00:00 2001
From: Dong Zhu <bluezhudong@gmail.com>
Date: Mon, 30 May 2016 17:40:05 +0800
Subject: [PATCH] kernel/numa: sleep 2s in test01 for waiting numastat updating
Without this patch we always get the below error with kernel 4.6.0:
"ltpapicmd.c:193: Test #1: NUMA hit and localnode increase in node0 is
less than expected"

numastat's "local_node" might take some time to update even though we
malloc the local memory for each node.

So this patch sleeps for 2 seconds to wait for "local_node" to get
updated.
Signed-off-by: Dong Zhu <bluezhudong@gmail.com>
---
testcases/kernel/numa/numa01.sh | 1 +
1 file changed, 1 insertion(+)
diff --git a/testcases/kernel/numa/numa01.sh b/testcases/kernel/numa/numa01.sh
index 9c5f49a..d3c7606 100755
--- a/testcases/kernel/numa/numa01.sh
+++ b/testcases/kernel/numa/numa01.sh
@@ -247,6 +247,7 @@ test01()
extract_numastat local_node $local_node $col || return 1
Prev_value=$RC
numactl --cpunodebind=$node --membind=$node support_numa $ALLOC_1MB
+ sleep 2s
numastat > $LTPTMP/numalog
extract_numastat local_node $local_node $col || return 1
Curr_value=$RC
--
2.1.0
--
Best Regards,
Dong Zhu
My Site: http://bluezd.info
* [LTP] [PATCH] kernel/numa: sleep 2s in test01 for waiting numastat updating
2016-05-30 9:56 [LTP] [PATCH] kernel/numa: sleep 2s in test01 for waiting numastat updating Dong ZHu
@ 2016-05-30 11:55 ` Cyril Hrubis
2016-05-31 1:08 ` Dong ZHu
0 siblings, 1 reply; 5+ messages in thread
From: Cyril Hrubis @ 2016-05-30 11:55 UTC (permalink / raw)
To: ltp
Hi!
> Without this patch we always get the below error with kernel 4.6.0:
> "ltpapicmd.c:193: Test #1: NUMA hit and localnode increase in node0 is
> less than expected"
>
> numastat's "local_node" might take some time to update even though we
> malloc the local memory for each node.
>
> So this patch sleeps for 2 seconds to wait for "local_node" to get
> updated.
I've seen this failure as well, and also in test 3, test 5 and test 6.
Two of them have a sleep 2s already, hence adding a sleep 2 is not good
enough; it only makes the race condition less probable.

What we should do instead is to read the new statistics in a loop with
a short (~0.1s) sleep until we get the expected value or until a
timeout (~10s) has been reached. And ideally this should be done as a
function called from each of the numa tests.
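The loop described above could look roughly like the sketch below in
numa01.sh's shell; wait_for_stat and get_stat are illustrative names,
not existing LTP helpers:

```shell
# Sketch of the suggested helper: poll a counter with a short sleep
# until it reaches the expected value, giving up after ~10s
# (100 iterations x 0.1s).  $1 is a command printing the current
# counter value, $2 is the expected minimum.
wait_for_stat()
{
	i=0
	while [ $i -lt 100 ]; do
		cur=$($1)
		[ "$cur" -ge "$2" ] && return 0
		sleep 0.1
		i=$((i + 1))
	done
	return 1
}
```

In the real tests the command argument would wrap the existing
numastat + extract_numastat calls, so all of the numa tests would share
one polling path instead of scattered fixed sleeps.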
--
Cyril Hrubis
chrubis@suse.cz
* [LTP] [PATCH] kernel/numa: sleep 2s in test01 for waiting numastat updating
2016-05-30 11:55 ` Cyril Hrubis
@ 2016-05-31 1:08 ` Dong ZHu
2016-05-31 16:45 ` Cyril Hrubis
0 siblings, 1 reply; 5+ messages in thread
From: Dong ZHu @ 2016-05-31 1:08 UTC (permalink / raw)
To: ltp
On Mon, May 30, 2016 at 7:55 PM, Cyril Hrubis <chrubis@suse.cz> wrote:
> Hi!
>> Without this patch we always get the below error with kernel 4.6.0:
>> "ltpapicmd.c:193: Test #1: NUMA hit and localnode increase in node0 is
>> less than expected"
>>
>> numastat's "local_node" might take some time to update even though we
>> malloc the local memory for each node.
>>
>> So this patch sleeps for 2 seconds to wait for "local_node" to get
>> updated.
>
> I've seen this failure as well, and also in test 3, test 5 and test 6.
> Two of them have a sleep 2s already, hence adding a sleep 2 is not good
> enough; it only makes the race condition less probable.
I didn't see this failure in test 3, test 4 and test 6.
When I add a sleep of 2 seconds, test 1 works properly.
>
> What we should do instead is to read the new statistics in a loop with
> a short (~0.1s) sleep until we get the expected value or until a
> timeout (~10s) has been reached. And ideally this should be done as a
> function called from each of the numa tests.
>
I think we should not wait for 10s; it is a bit long, and the test will
pass whether we add "numactl --cpunodebind=$node --membind=$node
support_numa $ALLOC_1MB" or not.

I suppose maybe we could refine support_numa to make it allocate a
bigger chunk of memory and run for a while rather than return
immediately.
--
Best Regards,
Dong Zhu
My Site: http://bluezd.info
* [LTP] [PATCH] kernel/numa: sleep 2s in test01 for waiting numastat updating
2016-05-31 1:08 ` Dong ZHu
@ 2016-05-31 16:45 ` Cyril Hrubis
2016-07-05 1:22 ` Han Pingtian
0 siblings, 1 reply; 5+ messages in thread
From: Cyril Hrubis @ 2016-05-31 16:45 UTC (permalink / raw)
To: ltp
Hi!
> > What we should do instead is to read the new statistics in a loop
> > with a short (~0.1s) sleep until we get the expected value or until
> > a timeout (~10s) has been reached. And ideally this should be done
> > as a function called from each of the numa tests.
> >
>
> I think we should not wait for 10s; it is a bit long, and the test
> will pass whether we add "numactl --cpunodebind=$node --membind=$node
> support_numa $ALLOC_1MB" or not.
Hmm, we are looking at accumulated statistics. So some of the tests will
just pass as other processes allocate memory as well.
> I suppose maybe we could refine support_numa to make it allocate a
> bigger chunk of memory and run for a while rather than return
> immediately.
I wonder if there is a file in /proc that could be used to determine
which NUMA node a page is allocated on. There is /proc/$pid/pagemap,
which can be used to map an address to something that can be used to
seek in /proc/kpageflags; that gives quite a lot of information about
the page state, but not which NUMA node it is allocated on...
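A hedged sketch of that /proc lookup from the shell, for what it is
worth; the paths are real, but this only gets as far as the raw pagemap
entry (present bit plus PFN, the latter zeroed for unprivileged
readers on newer kernels), which as noted still does not reveal the
NUMA node:

```shell
# Take the start address of this shell's first mapping, compute its
# virtual page index, and pull the matching 8-byte pagemap entry.
# Each entry in /proc/$pid/pagemap describes one virtual page.
addr=$(head -1 /proc/$$/maps | cut -d- -f1)
page=$((0x$addr / 4096))
entry=$(dd if=/proc/$$/pagemap bs=8 skip=$page count=1 2>/dev/null \
	| od -An -tx8 | tr -d ' \n')
echo "pagemap entry for 0x$addr: 0x$entry"
```

Bit 63 of the entry is the "page present" flag and bits 0-54 hold the
PFN, which is what one would then use to index /proc/kpageflags.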
--
Cyril Hrubis
chrubis@suse.cz
* [LTP] [PATCH] kernel/numa: sleep 2s in test01 for waiting numastat updating
2016-05-31 16:45 ` Cyril Hrubis
@ 2016-07-05 1:22 ` Han Pingtian
0 siblings, 0 replies; 5+ messages in thread
From: Han Pingtian @ 2016-07-05 1:22 UTC (permalink / raw)
To: ltp
On Tue, May 31, 2016 at 06:45:05PM +0200, Cyril Hrubis wrote:
> Hi!
> > > What we should do instead is to read the new statistics in a loop
> > > with a short (~0.1s) sleep until we get the expected value or
> > > until a timeout (~10s) has been reached. And ideally this should
> > > be done as a function called from each of the numa tests.
> > >
> >
> > I think we should not wait for 10s; it is a bit long, and the test
> > will pass whether we add "numactl --cpunodebind=$node
> > --membind=$node support_numa $ALLOC_1MB" or not.
>
> Hmm, we are looking at accumulated statistics. So some of the tests will
> just pass as other processes allocate memory as well.
>
> > I suppose maybe we could refine support_numa to make it allocate a
> > bigger chunk of memory and run for a while rather than return
> > immediately.
>
> I wonder if there is a file in /proc that could be used to determine
> which NUMA node a page is allocated on. There is /proc/$pid/pagemap,
> which can be used to map an address to something that can be used to
> seek in /proc/kpageflags; that gives quite a lot of information about
> the page state, but not which NUMA node it is allocated on...
>
On two systems with two different beta distros, test01 always fails
even after adding 'sleep 2s'. It looks like there is no guarantee when
numastat will be updated at all.
One distro uses a 4.4.13 kernel, the other uses a 4.4.0 kernel.