From: Jan Stancek <jstancek@redhat.com>
To: ltp@lists.linux.it
Subject: [LTP] [RFC] [PATCH] move_pages12: Allocate and free hugepages prior the test
Date: Thu, 11 May 2017 02:40:04 -0400 (EDT)
Message-ID: <57874670.10114549.1494484804573.JavaMail.zimbra@redhat.com>
In-Reply-To: <20170510150807.GF29838@rei.suse.de>



----- Original Message -----
> Hi!
> > > I've got a hint from our kernel devs that the problem may be that the
> > > per-node hugepage pool limits are set too low and increasing these
> > > seems to fix the issue for me. Apparently the /proc/sys/vm/nr_hugepages
> > > is global limit while the per-node limits are in sysfs.
> > > 
> > > Try increasing:
> > > 
> > > /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages
> > 
> > I'm not sure how that explains why it fails mid-test and not immediately
> > after start. It reminds me of sporadic hugetlbfs testsuite failures
> > in the "counters" testcase.
> 
> Probably some kind of lazy update / deferred freeing that still accounts
> for freshly removed pages.

That was my impression as well.
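
For reference, poking the per-node knob from code would look roughly
like this (untested sketch; the 2048kB path comes from the sysfs paths
quoted above, node numbering and the return convention are illustrative):

#include <stdio.h>

/* sketch: set the 2MB hugepage pool size on a given NUMA node */
static int set_node_hugepages(int node, long count)
{
	char path[256];
	FILE *f;

	snprintf(path, sizeof(path),
		 "/sys/devices/system/node/node%d/hugepages/"
		 "hugepages-2048kB/nr_hugepages", node);

	f = fopen(path, "w");
	if (!f)
		return -1;
	fprintf(f, "%ld\n", count);
	return fclose(f);
}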

> 
> > diff --git a/testcases/kernel/syscalls/move_pages/move_pages12.c b/testcases/kernel/syscalls/move_pages/move_pages12.c
> > index 443b0c6..fe8384f 100644
> > --- a/testcases/kernel/syscalls/move_pages/move_pages12.c
> > +++ b/testcases/kernel/syscalls/move_pages/move_pages12.c
> > @@ -84,6 +84,12 @@ static void do_child(void)
> >                         pages, nodes, status, MPOL_MF_MOVE_ALL));
> >                 if (TEST_RETURN) {
> >                         tst_res(TFAIL | TTERRNO, "move_pages failed");
> > +                       system("cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages");
> > +                       system("cat /sys/devices/system/node/node*/hugepages/hugepages-2048kB/free_hugepages");
> >                         break;
> >                 }
> >         }
> 
> Well, that is a few forks after the failure; if the race window is
> small enough we will never see the real values, but maybe doing open() and
> read() directly would show us different values.

For free/reserved, sure. But is the number of reserved huge pages on
each node going to change over time?
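
If we do read them directly, a minimal sketch could look like this
(the node* glob from the patch would have to be expanded to concrete
per-node paths; error handling trimmed):

#include <fcntl.h>
#include <stdlib.h>
#include <unistd.h>

/* sketch: read a sysfs counter directly, no fork/exec in between */
static long read_sysfs_long(const char *path)
{
	char buf[32];
	ssize_t len;
	int fd;

	fd = open(path, O_RDONLY);
	if (fd < 0)
		return -1;

	len = read(fd, buf, sizeof(buf) - 1);
	close(fd);
	if (len <= 0)
		return -1;

	buf[len] = '\0';
	return atol(buf);
}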

---

I was running with 20+20 huge pages overnight and it hasn't failed a
single time. So I'm thinking we allocate 3+3 or 4+4 huge pages to avoid
any issues related to lazy/deferred updates.
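
Rough idea of the setup I have in mind, assuming the new-library
SAFE_FILE_PRINTF helper is available (node numbers and the page count
are just an example):

#include <stdio.h>
#include "tst_test.h"

#define NODE_HP_PATH "/sys/devices/system/node/node%d/hugepages/" \
		     "hugepages-2048kB/nr_hugepages"

/* sketch: reserve a few huge pages on one node before the test starts */
static void reserve_node_hugepages(int node, long pages)
{
	char path[256];

	snprintf(path, sizeof(path), NODE_HP_PATH, node);
	SAFE_FILE_PRINTF(path, "%ld", pages);
}

/* e.g. in setup(): reserve_node_hugepages(node1, 4);
 *                  reserve_node_hugepages(node2, 4); */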

Regards,
Jan


Thread overview: 11+ messages
2017-05-09 14:04 [LTP] [RFC] [PATCH] move_pages12: Allocate and free hugepages prior the test Cyril Hrubis
2017-05-10  8:56 ` Jan Stancek
2017-05-10 12:21   ` Cyril Hrubis
2017-05-10 13:01     ` Jan Stancek
2017-05-10 13:49   ` Cyril Hrubis
2017-05-10 14:14     ` Jan Stancek
2017-05-10 15:08       ` Cyril Hrubis
2017-05-11  6:40         ` Jan Stancek [this message]
2017-05-11 12:26           ` Cyril Hrubis
2017-05-11 12:50             ` Jan Stancek
2017-05-16  9:30               ` Cyril Hrubis
