From: Prarit Bhargava <prarit@redhat.com>
To: Waiman Long <Waiman.Long@hpe.com>,
	Thomas Gleixner <tglx@linutronix.de>,
	Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org,
	Borislav Petkov <bp@suse.de>, Andy Lutomirski <luto@kernel.org>,
	Dave Hansen <dave.hansen@intel.com>,
	Scott J Norton <scott.norton@hpe.com>,
	Douglas Hatch <doug.hatch@hpe.com>,
	Randy Wright <rwright@hpe.com>
Subject: Re: [RESEND PATCH v6] x86/hpet: Reduce HPET counter read contention
Date: Tue, 06 Sep 2016 11:33:27 -0400	[thread overview]
Message-ID: <57CEE1C7.40505@redhat.com> (raw)
In-Reply-To: <1473175676-27713-1-git-send-email-Waiman.Long@hpe.com>



On 09/06/2016 11:27 AM, Waiman Long wrote:
> On a large system with many CPUs, using HPET as the clock source can
> have a significant impact on the overall system performance because
> of the following reasons:
>  1) There is a single HPET counter shared by all the CPUs.
>  2) HPET counter reading is a very slow operation.
> 
> Using HPET as the default clock source may happen when, for example,
> the TSC clock calibration exceeds the allowable tolerance. Sometimes
> the performance slowdown can be so severe that the system may crash
> because of an NMI watchdog soft lockup.
> 
> During the TSC clock calibration process, the default clock source
> will be set temporarily to HPET. For systems with many CPUs, it is
> possible that an NMI watchdog soft lockup may occur occasionally during
> that short time period when HPET clocking is active, as shown in
> the kernel log below:
> 
> [   71.618132] NetLabel: Initializing
> [   71.621967] NetLabel:  domain hash size = 128
> [   71.626848] NetLabel:  protocols = UNLABELED CIPSOv4
> [   71.632418] NetLabel:  unlabeled traffic allowed by default
> [   71.638679] hpet0: at MMIO 0xfed00000, IRQs 2, 8, 0, 0, 0, 0, 0, 0
> [   71.646504] hpet0: 8 comparators, 64-bit 14.318180 MHz counter
> [   71.655313] Switching to clocksource hpet
> [   95.679135] BUG: soft lockup - CPU#144 stuck for 23s! [swapper/144:0]
> [   95.693363] BUG: soft lockup - CPU#145 stuck for 23s! [swapper/145:0]
> [   95.694203] Modules linked in:
> [   95.694697] CPU: 145 PID: 0 Comm: swapper/145 Not tainted 3.10.0-327.el7.x86_64 #1
> [   95.695580] BUG: soft lockup - CPU#582 stuck for 23s! [swapper/582:0]
> [   95.696145] Hardware name: HP Superdome2 16s x86, BIOS Bundle: 008.001.006 SFW: 041.063.152 01/16/2016
> [   95.698128] BUG: soft lockup - CPU#357 stuck for 23s! [swapper/357:0]
> 
> This patch attempts to address the above issues by reducing HPET read
> contention, using the fact that if more than one CPU is trying to
> access the HPET at the same time, it is more efficient to have only
> one CPU in the group read the HPET counter and share it with the
> rest of the group than to have each group member read the HPET
> counter individually.
> 
> This is done by using a combination word with a sequence number and
> a bit lock. The CPU that gets the bit lock is responsible for
> reading the HPET counter and updating the sequence number. The others
> monitor the change in the sequence number and grab the HPET counter
> value accordingly. This change is only enabled in SMP configurations.
> 
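[Editor's note: for readers unfamiliar with the scheme, the idea roughly
translates to the sketch below. This is a minimal, made-up user-space
illustration, not the code in the patch: the names hpet_ctrl, hpet_value
and read_hpet_raw are hypothetical, the spin bound is arbitrary, and the
real implementation also has to handle the 32-bit HPET counter,
monotonicity across CPUs, and cpu_relax()-style spinning.]

#include <stdint.h>

#define HPET_SEQ_LOCK  0x01u   /* bit 0: a CPU is currently reading the counter */
#define HPET_SEQ_STEP  0x02u   /* sequence advances by 2 so bit 0 stays free */

static uint32_t hpet_ctrl;     /* combination word: sequence number + lock bit */
static uint64_t hpet_value;    /* last counter value published by the lock owner */

/* Stand-in for the slow MMIO read of the shared HPET counter (hypothetical). */
static uint64_t read_hpet_raw(void)
{
        static uint64_t fake;
        return __atomic_add_fetch(&fake, 1, __ATOMIC_RELAXED);
}

static uint64_t read_hpet_shared(void)
{
        uint32_t old = __atomic_load_n(&hpet_ctrl, __ATOMIC_ACQUIRE);
        uint32_t exp = old;

        /* Try to become the one CPU that performs the expensive read. */
        if (!(old & HPET_SEQ_LOCK) &&
            __atomic_compare_exchange_n(&hpet_ctrl, &exp, old | HPET_SEQ_LOCK,
                                        0, __ATOMIC_ACQUIRE, __ATOMIC_RELAXED)) {
                uint64_t now = read_hpet_raw();

                /* Publish the value, then bump the sequence and drop the lock bit. */
                __atomic_store_n(&hpet_value, now, __ATOMIC_RELAXED);
                __atomic_store_n(&hpet_ctrl,
                                 (old + HPET_SEQ_STEP) & ~HPET_SEQ_LOCK,
                                 __ATOMIC_RELEASE);
                return now;
        }

        /*
         * Someone else is (or just was) doing the read: wait a bounded time
         * for the sequence number to move past what we sampled, then reuse
         * the published value.  Fall back to a direct read if it never moves.
         */
        for (int spin = 0; spin < 100000; spin++) {
                uint32_t cur = __atomic_load_n(&hpet_ctrl, __ATOMIC_ACQUIRE);

                if (cur != old && !(cur & HPET_SEQ_LOCK))
                        return __atomic_load_n(&hpet_value, __ATOMIC_RELAXED);
        }
        return read_hpet_raw();        /* unlikely: no owner showed up, read directly */
}

The point is the same as described above: only one MMIO access is in
flight per batch of concurrent callers, so the cost of the slow HPET
read is amortized across all CPUs that need a timestamp at the same time.
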
> On a 4-socket Haswell-EX box with 144 threads (HT on), running the
> AIM7 compute workload (1500 users) on a 4.8-rc1 kernel (HZ=1000)
> with and without the patch gives the following performance numbers
> (with HPET or TSC as the clock source):
> 
> TSC		= 1042431 jobs/min
> HPET w/o patch	=  798068 jobs/min
> HPET with patch	= 1029445 jobs/min
> 
> The perf profile showed a reduction of the %CPU time consumed by
> read_hpet from 11.19% without patch to 1.24% with patch.
> 
> Signed-off-by: Waiman Long <Waiman.Long@hpe.com>

This resolves the boot-time problems on my systems.  I've also seen a
performance increase of about 5% with this patch when using the HPET.

Tested-by: Prarit Bhargava <prarit@redhat.com>

P.

Thread overview: 5+ messages
2016-09-06 15:27 [RESEND PATCH v6] x86/hpet: Reduce HPET counter read contention Waiman Long
2016-09-06 15:33 ` Prarit Bhargava [this message]
2016-09-06 15:45 ` Waiman Long
2016-09-06 15:50 ` Thomas Gleixner
2016-09-06 17:10   ` Waiman Long