From: Wei Gao via ltp <ltp@lists.linux.it>
To: Li Wang <liwang@redhat.com>
Cc: ltp@lists.linux.it
Subject: Re: [LTP] [PATCH v2] kill01: New case cgroup kill
Date: Mon, 6 Mar 2023 09:54:58 -0500 [thread overview]
Message-ID: <20230306145458.GA21526@localhost> (raw)
In-Reply-To: <CAEemH2cQSDkdULUbVAWb2mUKn8g1UJ4vUtVeJ_KdyfoBccJ-wQ@mail.gmail.com>
On Mon, Mar 06, 2023 at 06:16:26PM +0800, Li Wang wrote:
> On Sun, Mar 5, 2023 at 5:14 PM Wei Gao via ltp <ltp@lists.linux.it> wrote:
>
> > Signed-off-by: Wei Gao <wegao@suse.com>
> > ---
> > +#define pid_num 100
> >
>
> My concern about defining pid_num as a fixed variable is that
> the test may spend a long time on a single_cpu or slower system.
> A sanity way is probably to use a dynamical number according
> to the test bed available CPUs (e.g. tst_ncpus_available() + 1).
good idea!
>
>
>
> > +static struct tst_cg_group *cg_child_test_simple;
> > +
> > +
> > +static int wait_for_pid(pid_t pid)
> > +{
> > + int status, ret;
> > +
> > +again:
> > + ret = waitpid(pid, &status, 0);
> > + if (ret == -1) {
> > + if (errno == EINTR)
> > + goto again;
> > +
> > + return -1;
> > + }
> > +
> > + if (!WIFEXITED(status))
> > + return -1;
> > +
> > + return WEXITSTATUS(status);
> > +}
> > +
> > +/*
> > + * A simple process running in a sleep loop until being
> > + * re-parented.
> > + */
> > +static int child_fn(void)
> > +{
> > + int ppid = getppid();
> > +
> > + while (getppid() == ppid)
> > + usleep(1000);
> > +
> > + return getppid() == ppid;
> >
>
> why do we need to return the value of this comparison?
> I suppose most time the child does _not_ have a chance
> to get here.
Yes, the chance of reaching this point is small in our scenario; I will remove
this logic.

>
>
>
> > +}
> > +
> > +static int cg_run_nowait(const struct tst_cg_group *const cg,
> > + int (*fn)(void))
> > +{
> > + int pid;
> > +
> > + pid = fork();
> >
>
> use SAFE_FORK() maybe better.
good catch!
>
>
>
> > + if (pid == 0) {
> > + SAFE_CG_PRINTF(cg, "cgroup.procs", "%d", getpid());
> > + exit(fn());
> > + }
> > +
> > + return pid;
> > +}
> > +
> > +static int cg_wait_for_proc_count(const struct tst_cg_group *cg, int count)
> > +{
> > + char buf[20 * pid_num] = {0};
> > + int attempts;
> > + char *ptr;
> > +
> > + for (attempts = 10; attempts >= 0; attempts--) {
> > + int nr = 0;
> > +
> > + SAFE_CG_READ(cg, "cgroup.procs", buf, sizeof(buf));
> > +
> > + for (ptr = buf; *ptr; ptr++)
> > + if (*ptr == '\n')
> > + nr++;
> > +
> > + if (nr >= count)
> > + return 0;
> > +
> > + usleep(100000);
> >
>
> In this loop, there is only 1 second for waiting for children ready.
> So, if test on a slower/overload machine that is a bit longer than this
> time,
> what will happen? shouldn't we handle this as a corner case failure?
I will increase the timeout to 10 seconds; if the child processes still have
not shown up in the correct cgroup by then, we will treat that as a failure.
>
>
>
> > + }
> > +
> > + return -1;
> > +}
> > +
> > +static void run(void)
> > +{
> > +
> > + pid_t pids[100];
> > + int i;
> > +
> > + cg_child_test_simple = tst_cg_group_mk(tst_cg, "cg_test_simple");
> > +
> > + for (i = 0; i < pid_num; i++)
> > + pids[i] = cg_run_nowait(cg_child_test_simple, child_fn);
> > +
> > + TST_EXP_PASS(cg_wait_for_proc_count(cg_child_test_simple, pid_num));
> > + SAFE_CG_PRINTF(cg_child_test_simple, "cgroup.kill", "%d", 1);
> > +
> > + for (i = 0; i < pid_num; i++) {
> > + /* wait_for_pid(pids[i]); */
> > + TST_EXP_PASS_SILENT(wait_for_pid(pids[i]) == SIGKILL);
> > + }
> > +
> > + cg_child_test_simple = tst_cg_group_rm(cg_child_test_simple);
> > +}
> > +
> > +static struct tst_test test = {
> > + .test_all = run,
> > + .forks_child = 1,
> > + .max_runtime = 5,
> > + .needs_cgroup_ctrls = (const char *const []){ "memory", NULL },
> > + .needs_cgroup_ver = TST_CG_V2,
> > +};
> > --
> > 2.35.3
> >
> >
> > --
> > Mailing list info: https://lists.linux.it/listinfo/ltp
> >
> >
>
> --
> Regards,
> Li Wang
--
Mailing list info: https://lists.linux.it/listinfo/ltp
Thread overview: 56+ messages
2023-02-24 2:38 [LTP] [PATCH v1] kill01: New case cgroup kill Wei Gao via ltp
2023-02-24 10:12 ` Cyril Hrubis
2023-02-24 12:27 ` Wei Gao via ltp
2023-03-05 9:10 ` [LTP] [PATCH v2] " Wei Gao via ltp
2023-03-06 10:16 ` Li Wang
2023-03-06 14:54 ` Wei Gao via ltp [this message]
2023-03-06 15:13 ` [LTP] [PATCH v3] " Wei Gao via ltp
2023-03-06 23:57 ` [LTP] [PATCH v4] " Wei Gao via ltp
2023-03-07 7:13 ` Li Wang
2023-03-07 8:27 ` Wei Gao via ltp
2023-03-07 11:23 ` Li Wang
2023-03-07 8:51 ` [LTP] [PATCH v5] " Wei Gao via ltp
2023-03-07 11:37 ` Li Wang
2023-03-09 21:40 ` Petr Vorel
2023-03-15 12:23 ` Wei Gao via ltp
2023-03-13 10:45 ` Richard Palethorpe
2023-03-15 5:47 ` Li Wang
2023-03-15 12:55 ` Wei Gao via ltp
2023-03-16 11:10 ` Richard Palethorpe
2023-03-18 5:00 ` Wei Gao via ltp
2023-03-15 18:52 ` Petr Vorel
2023-03-18 4:52 ` [LTP] [PATCH v6] " Wei Gao via ltp
2023-03-29 6:28 ` Petr Vorel
2023-04-19 15:18 ` [LTP] [PATCH v7 0/2] " Wei Gao via ltp
2023-04-19 15:18 ` [LTP] [PATCH v7 1/2] " Wei Gao via ltp
2023-04-19 15:18 ` [LTP] [PATCH v7 2/2] tst_cgroup.c: Add a cgroup pseudo controller Wei Gao via ltp
2023-04-21 1:26 ` [LTP] [PATCH v8 0/2] kill01: New case cgroup kill Wei Gao via ltp
2023-04-21 1:26 ` [LTP] [PATCH v8 1/2] " Wei Gao via ltp
2023-04-21 6:35 ` Li Wang
2023-04-21 1:26 ` [LTP] [PATCH v8 2/2] tst_cgroup.c: Add a cgroup pseudo controller Wei Gao via ltp
2023-04-21 4:33 ` Li Wang
2023-04-21 10:58 ` Cyril Hrubis
2023-04-22 13:53 ` [LTP] [PATCH v9 0/2] kill01: New case cgroup kill Wei Gao via ltp
2023-04-22 13:53 ` [LTP] [PATCH v9 1/2] " Wei Gao via ltp
2023-04-26 13:11 ` Cyril Hrubis
2023-04-27 12:13 ` Shivani Samala
2023-04-27 12:18 ` Cyril Hrubis
2023-04-22 13:53 ` [LTP] [PATCH v9 2/2] tst_cgroup.c: Add a cgroup pseudo controller Wei Gao via ltp
2023-04-23 6:46 ` Li Wang
2023-04-26 13:12 ` Cyril Hrubis
2023-04-28 0:16 ` [LTP] [PATCH v10 0/2] kill01: New case cgroup kill Wei Gao via ltp
2023-04-28 0:17 ` [LTP] [PATCH v10 1/2] " Wei Gao via ltp
2023-04-28 8:04 ` Petr Vorel
2023-04-28 0:17 ` [LTP] [PATCH v10 2/2] tst_cgroup.c: Add a cgroup base controller Wei Gao via ltp
2023-04-28 7:59 ` Petr Vorel
2023-04-28 10:10 ` [LTP] [PATCH v11 0/2] New case test cgroup kill feature Wei Gao via ltp
2023-04-28 10:10 ` [LTP] [PATCH v11 1/2] tst_cgroup.c: Add a cgroup base controller Wei Gao via ltp
2023-04-28 10:10 ` [LTP] [PATCH v11 2/2] cgroup_core03.c: New case test cgroup kill feature Wei Gao via ltp
2023-04-30 7:48 ` [LTP] [PATCH v12 0/2] " Wei Gao via ltp
2023-04-30 7:48 ` [LTP] [PATCH v12 1/2] tst_cgroup.c: Add a cgroup base controller Wei Gao via ltp
2023-04-30 13:44 ` Li Wang
2023-04-30 23:39 ` Wei Gao via ltp
2023-05-02 6:56 ` Petr Vorel
2023-05-02 9:12 ` Petr Vorel
2023-04-30 7:48 ` [LTP] [PATCH v12 2/2] cgroup_core03.c: New case test cgroup kill feature Wei Gao via ltp
2023-04-30 13:44 ` Li Wang