public inbox for linux-kernel@vger.kernel.org
From: Dave Hansen <dave.hansen@intel.com>
To: kernel test robot <oliver.sang@intel.com>,
	Thomas Gleixner <tglx@linutronix.de>
Cc: Borislav Petkov <bp@suse.de>,
	"Chang S. Bae" <chang.seok.bae@intel.com>,
	LKML <linux-kernel@vger.kernel.org>,
	lkp@lists.01.org, lkp@intel.com, ying.huang@intel.com,
	feng.tang@intel.com, zhengjun.xing@linux.intel.com,
	fengwei.yin@intel.com
Subject: Re: [x86/signal] 3aac3ebea0: will-it-scale.per_thread_ops -11.9% regression
Date: Tue, 7 Dec 2021 15:14:38 -0800	[thread overview]
Message-ID: <bbc24579-b6ee-37cb-4bbf-10e3476537e0@intel.com> (raw)
In-Reply-To: <20211207012128.GA16074@xsang-OptiPlex-9020>

On 12/6/21 5:21 PM, kernel test robot wrote:
> 
> 1bdda24c4af64cd2 3aac3ebea08f2d342364f827c89 
> ---------------- --------------------------- 
>          %stddev     %change         %stddev
>              \          |                \  
>     980404 ±  3%     -10.2%     880436 ±  2%  will-it-scale.16.threads
>      61274 ±  3%     -10.2%      55027 ±  2%  will-it-scale.per_thread_ops
>     980404 ±  3%     -10.2%     880436 ±  2%  will-it-scale.workload
>    9745749 ± 18%     +26.8%   12356608 ±  4%  meminfo.DirectMap2M
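
As a quick sanity check (my own arithmetic, not from the report), the headline number does follow from the raw averages above:

```shell
# Recompute the will-it-scale.16.threads delta from the two raw averages
# quoted in the report (1bdda24c4af64cd2 vs. 3aac3ebea08f2d342364f827c89).
change=$(awk 'BEGIN { printf "%.1f", (880436 - 980404) / 980404 * 100 }')
echo "will-it-scale.16.threads: ${change}% change"
```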

Something else funky is going on here.  Why would there suddenly be so
many more 2M pages in the direct map?  I also see gunk like interrupts
on the network card going up.  I could certainly see that happening if
something else on the network was messing around.

Granted, this was seen across several systems, but it's really odd.  I
guess I'll go try to dig up one of the actual systems where it was seen.

I tried on a smaller Skylake system and I don't see any regression at
all or any interesting delta in a perf profile.

Oliver or Chang, could you try to reproduce this by hand on one of the
suspect systems?  Build:

  1bdda24c4a ("signal: Add an optional check for altstack size")

then run will-it-scale by hand.  Then build:

  3aac3ebea0 ("x86/signal: Implement sigaltstack size validation")

and run it again.  Also, do we see any higher core-count regressions?
These all seem to happen with:

	mode=thread
	nr_task=16

It's really odd to see that on systems with probably ~50 cores each.
I'd expect it to get worse at higher core counts.
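
If it helps, here's roughly what I mean as a dry-run sketch.  It only
prints the intended steps (each revision needs a kernel build and a
reboot), and the will-it-scale runner invocation is my guess, so adjust
it to match the job file:

```shell
# Sketch of the A/B comparison between the two commits.  This prints
# the steps rather than running them; the benchmark invocation on the
# last line is an assumption, not taken from the LKP job file.
set -eu

GOOD=1bdda24c4a   # "signal: Add an optional check for altstack size"
BAD=3aac3ebea0    # "x86/signal: Implement sigaltstack size validation"

for rev in "$GOOD" "$BAD"; do
    echo "git checkout $rev && make -j\$(nproc)   # build and boot this kernel"
    # mode=thread, nr_task=16 per the job file; runner flags are a guess:
    echo "cd will-it-scale && ./runtest.py <testcase>   # 16 threads"
done
```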


Thread overview: 14+ messages
2021-12-07  1:21 [x86/signal] 3aac3ebea0: will-it-scale.per_thread_ops -11.9% regression kernel test robot
2021-12-07  1:44 ` Oliver Sang
2021-12-07 13:38 ` Thomas Gleixner
2021-12-07 18:49   ` Bae, Chang Seok
2021-12-07 20:36     ` Thomas Gleixner
2021-12-07 22:17       ` Bae, Chang Seok
2021-12-08  0:59         ` Yin Fengwei
2021-12-09  2:30   ` [LKP] " Carel Si
2021-12-07 23:14 ` Dave Hansen [this message]
2021-12-08 18:00   ` Bae, Chang Seok
2021-12-08 18:20     ` Dave Hansen
2021-12-08 19:14       ` Thomas Gleixner
2021-12-09  8:13       ` Thomas Gleixner
2021-12-10  4:15   ` [LKP] " Carel Si
