From: Andi Kleen <andi@firstfloor.org>
To: Nick Piggin <npiggin@suse.de>
Cc: Andi Kleen <andi@firstfloor.org>,
Srivatsa Vaddagiri <vatsa@in.ibm.com>,
Avi Kivity <avi@redhat.com>, Gleb Natapov <gleb@redhat.com>,
linux-kernel@vger.kernel.org, kvm@vger.kernel.org, hpa@zytor.com,
mingo@elte.hu, tglx@linutronix.de, mtosatti@redhat.com
Subject: Re: [PATCH] use unfair spinlock when running on hypervisor.
Date: Thu, 3 Jun 2010 19:25:34 +0200
Message-ID: <20100603172534.GF4166@basil.fritz.box>
In-Reply-To: <20100603153518.GP6822@laptop>
> That would certainly be a part of it, I'm sure they have stronger
> fairness and guarantees at the expense of some performance. We saw the
> spinlock starvation first on 8-16 core Opterons I think, whereas Altix
> had been over 1024 cores and POWER7 1024 threads now apparently without
> reported problems.
I suppose POWER7 handles that in the hypervisor through the pvcall.

Altix AFAIK has special hardware for this in the interconnect, but as
individual nodes get larger and gain more cores you'll start seeing it
there too.
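
Roughly the scheme behind such a pvcall, as an untested userspace
sketch: spin a bounded number of times, then hand the cycles back to
the hypervisor instead of burning them. hv_yield() is a stand-in for
whatever yield hypercall the HV actually provides; here it is just
sched_yield() so the sketch compiles.

#include <sched.h>
#include <stdatomic.h>

#define SPIN_THRESHOLD 1024

/* Stand-in for a real "yield to the hypervisor" hypercall. */
static void hv_yield(void)
{
    sched_yield();
}

typedef struct {
    atomic_flag locked;
} pv_spinlock_t;

#define PV_SPINLOCK_INIT { ATOMIC_FLAG_INIT }

static void pv_spin_lock(pv_spinlock_t *l)
{
    for (;;) {
        for (int i = 0; i < SPIN_THRESHOLD; i++)
            if (!atomic_flag_test_and_set_explicit(&l->locked,
                                                   memory_order_acquire))
                return;  /* got the lock */
        /* Spun "too long": the holder's vCPU is likely preempted,
         * so give the timeslice back rather than busy-waiting. */
        hv_yield();
    }
}

static void pv_spin_unlock(pv_spinlock_t *l)
{
    atomic_flag_clear_explicit(&l->locked, memory_order_release);
}

The threshold is the knob: too low and you hypercall on every contended
acquire, too high and you burn most of a timeslice before yielding.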
In general we now have the problem that, with increasing core counts
per socket, each NUMA node can be a fairly large SMP system by itself,
and several of the old SMP scalability problems that were fixed by
introducing per-node data structures are back. For example, this is
now a serious problem with the zone locks in some workloads on
8-core + HT systems.
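
The classic fix is to shard the hot lock per node, something like the
untested sketch below. this_node() is a stand-in for numa_node_id()
(userspace could map getcpu(2) to a node), and the rare global path
takes every shard in a fixed order.

#include <pthread.h>
#include <stdalign.h>

#define MAX_NODES 8

/* One lock per NUMA node, each on its own cache line so the shards
 * don't false-share. */
struct per_node_locks {
    struct {
        alignas(64) pthread_mutex_t lock;
    } node[MAX_NODES];
};

/* Stand-in for numa_node_id(). */
static int this_node(void)
{
    return 0;
}

static void per_node_locks_init(struct per_node_locks *l)
{
    for (int n = 0; n < MAX_NODES; n++)
        pthread_mutex_init(&l->node[n].lock, NULL);
}

/* Hot path: only the local node's lock, so no cross-node bouncing. */
static void lock_local(struct per_node_locks *l)
{
    pthread_mutex_lock(&l->node[this_node()].lock);
}

static void unlock_local(struct per_node_locks *l)
{
    pthread_mutex_unlock(&l->node[this_node()].lock);
}

/* Rare global path: take all shards, always in the same order to
 * avoid deadlock. */
static void lock_all(struct per_node_locks *l)
{
    for (int n = 0; n < MAX_NODES; n++)
        pthread_mutex_lock(&l->node[n].lock);
}

static void unlock_all(struct per_node_locks *l)
{
    for (int n = MAX_NODES - 1; n >= 0; n--)
        pthread_mutex_unlock(&l->node[n].lock);
}

You trade a more expensive (but rare) global operation for
contention-free local ones, which is exactly what the per-node
structures bought us the first time around.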
> So I think actively enforcing fairness at the lock level would be
> required. Something like if it is detected that a core is not making
I suppose how exactly that works is IBM's secret sauce. Anyway,
as long as there are no reports I wouldn't worry about it.
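
For reference, the baseline fairness under discussion here is the
strict FIFO ordering a ticket lock enforces, roughly (untested
sketch, zero-initialize the struct):

#include <stdatomic.h>

/* Minimal ticket lock: waiters are served strictly in arrival order. */
typedef struct {
    atomic_uint next_ticket;  /* next ticket to hand out */
    atomic_uint now_serving;  /* ticket allowed to hold the lock */
} ticket_lock_t;

static void ticket_lock(ticket_lock_t *l)
{
    unsigned me = atomic_fetch_add_explicit(&l->next_ticket, 1,
                                            memory_order_relaxed);
    while (atomic_load_explicit(&l->now_serving,
                                memory_order_acquire) != me)
        ;  /* spin until our number comes up */
}

static void ticket_unlock(ticket_lock_t *l)
{
    atomic_fetch_add_explicit(&l->now_serving, 1,
                              memory_order_release);
}

Under virtualization that strictness is exactly the problem: if the
vCPU holding the next ticket gets preempted, every later waiter spins
uselessly behind it.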
-Andi
--
ak@linux.intel.com -- Speaking for myself only.