From: Prarit Bhargava <prarit@redhat.com>
To: john stultz <johnstul@us.ibm.com>,
Linux Kernel <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] [RFC] Potential fix for leapsecond caused futex related load spikes
Date: Sun, 01 Jul 2012 11:28:43 -0400
Message-ID: <4FF06CAB.9020800@redhat.com>
John,

I was hit by the futex issue as well.  I saw your patch and quickly ran a test
with top-of-tree + your patch, using your reproducer.  I ended up with warnings
from the smp_call_function code, followed by all sorts of deadlocks, etc.

I haven't had a chance to debug yet and will start doing so shortly ...
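
The reproducer itself is not quoted in this mail; as a rough stand-in (a
hypothetical illustration, not the actual program referenced above), a leap
second can be armed a few seconds before midnight UTC with settimeofday(2)
plus adjtimex(2):

/*
 * Hypothetical leap-second trigger (an illustration, not the actual
 * reproducer referenced above): step the clock to just before the
 * 2012-06-30 leap-second boundary and ask the kernel to insert a leap
 * second via adjtimex(2).  Needs root and steps the system clock, so
 * only run it on a scratch box or VM.
 */
#include <stdio.h>
#include <sys/time.h>
#include <sys/timex.h>

int main(void)
{
        struct timeval tv = {
                .tv_sec  = 1341100790,  /* 2012-06-30 23:59:50 UTC */
                .tv_usec = 0,
        };
        struct timex tx = {
                .modes  = ADJ_STATUS,
                .status = STA_INS,      /* insert a leap second at midnight UTC */
        };

        if (settimeofday(&tv, NULL)) {
                perror("settimeofday");
                return 1;
        }
        if (adjtimex(&tx) < 0) {
                perror("adjtimex");
                return 1;
        }
        printf("leap second armed, waiting for 23:59:60...\n");
        return 0;
}

The console log below is what I see shortly after the armed leap second hits:
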
intel-canoepass-02 login: [ 108.479555] Clock: inserting leap second 23:59:60 UTC
[ 108.485199] ------------[ cut here ]------------
[ 108.490368] WARNING: at kernel/smp.c:461 smp_call_function_many+0xbd/0x260()
[ 108.498236] Hardware name: S2600CP
[ 108.502060] Modules linked in: nfs nfs_acl auth_rpcgss fscache lockd sunrpc
kvm_intel igb coretemp kvm ixgbe ptp pps_core ioatdma mdio tpm_tis crc32c_intel
wmi joydev dca tpm lpc_ich ghash_clmulni_intel sb_edac mfd_core edac_core
i2c_i801 microcode pcspkr tpm_bios hid_generic isci libsas scsi_transport_sas
mgag200 i2c_algo_bit drm_kms_helper ttm drm i2c_core [last unloaded: scsi_wait_scan]
[ 108.540561] Pid: 1328, comm: leaptest Not tainted 3.5.0-rc4+ #4
[ 108.547169] Hypervisor: no hypervisor
[ 108.551273] Call Trace:
[ 108.554019] <IRQ> [<ffffffff8105814f>] warn_slowpath_common+0x7f/0xc0
[ 108.561398] [<ffffffff810581aa>] warn_slowpath_null+0x1a/0x20
[ 108.567911] [<ffffffff810b39bd>] smp_call_function_many+0xbd/0x260
[ 108.574931] [<ffffffff8107e960>] ? hrtimer_wakeup+0x30/0x30
[ 108.581242] [<ffffffff8107e960>] ? hrtimer_wakeup+0x30/0x30
[ 108.587560] [<ffffffff810b3cb2>] smp_call_function+0x22/0x30
[ 108.593982] [<ffffffff810b3d18>] on_each_cpu+0x28/0x70
[ 108.599825] [<ffffffff8107ef7c>] clock_was_set+0x1c/0x30
[ 108.605847] [<ffffffff810a71d5>] do_timer+0x315/0x570
[ 108.611592] [<ffffffff810adb18>] tick_do_update_jiffies64+0x78/0xc0
[ 108.618680] [<ffffffff810add28>] tick_sched_timer+0xb8/0xc0
[ 108.624991] [<ffffffff8107ed03>] __run_hrtimer+0x73/0x1d0
[ 108.631111] [<ffffffff810adc70>] ? tick_nohz_handler+0x110/0x110
[ 108.637908] [<ffffffff8107f5d7>] hrtimer_interrupt+0xd7/0x1f0
[ 108.644447] [<ffffffff81610c19>] smp_apic_timer_interrupt+0x69/0x99
[ 108.651550] [<ffffffff8160f98a>] apic_timer_interrupt+0x6a/0x70
[ 108.658255] <EOI>
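
For what it's worth, the trace shows the cross-CPU call being issued from hard
interrupt context: tick_sched_timer() -> do_timer() -> clock_was_set() ->
on_each_cpu() -> smp_call_function_many(), all running with interrupts
disabled.  The warning at kernel/smp.c:461 is the usual "can deadlock when
called with interrupts disabled" sanity check; roughly (paraphrasing what I
assume the 3.5-era code looks like, not an exact quote):

        /*
         * Calling smp_call_function_many() with IRQs disabled can deadlock,
         * since the caller may spin waiting for other CPUs to answer the IPI.
         * (Paraphrased from what I assume kernel/smp.c contains here.)
         */
        WARN_ON_ONCE(cpu_online(this_cpu) && irqs_disabled()
                     && !oops_in_progress && !early_boot_irqs_disabled);

So clock_was_set() being reached from the timer interrupt path looks like the
immediate trigger here.
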
Thread overview: 12+ messages
2012-07-01 15:28 Prarit Bhargava [this message]
2012-07-01 16:56 ` [PATCH] [RFC] Potential fix for leapsecond caused futex related load spikes Prarit Bhargava
2012-07-01 17:28 ` John Stultz
2012-07-02 10:16 ` Richard Cochran
2012-07-02 16:58 ` John Stultz
2012-07-02 20:08 ` Sytse Wielinga
2012-07-03 9:23 ` Richard Cochran
2012-07-03 12:05 ` Sytse Wielinga
2012-07-03 13:41 ` Richard Cochran
-- strict thread matches above, loose matches on Subject: below --
2012-07-01 9:36 John Stultz
2012-07-01 9:42 ` John Stultz
2012-07-01 12:00 ` Jan Ceuleers