From: Alexey Kardashevskiy <aik@ozlabs.ru>
To: Alexander Graf <agraf@suse.de>
Cc: Mike Day <ncmike@ncultra.org>, Paul Mackerras <paulus@samba.org>,
"qemu-ppc@nongnu.org" <qemu-ppc@nongnu.org>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>
Subject: Re: [Qemu-devel] [RFC PATCH v2] PPC: smp: autodetect numbers of threads per core
Date: Sat, 11 Jan 2014 01:13:22 +1100
Message-ID: <52D00002.7020208@ozlabs.ru>
In-Reply-To: <03847015-B363-4234-BF34-BB034D0BBE08@suse.de>
On 01/11/2014 01:00 AM, Alexander Graf wrote:
>
> On 10.01.2014, at 14:42, Alexey Kardashevskiy <aik@ozlabs.ru> wrote:
>
>> On 01/11/2014 12:28 AM, Alexander Graf wrote:
>>>
>>> On 10.01.2014, at 14:03, Mike Day <ncmike@ncultra.org> wrote:
>>>
>>>>
>>>> Alexey Kardashevskiy <aik@ozlabs.ru> writes:
>>>>
>>>>> On 01/10/2014 10:40 AM, Alexander Graf wrote:
>>>>>>
>>>>>
>>>>>> What if we make the max thread count a property of our cpu
>>>>>> class? Then we can add a threads=max option which will be
>>>>>> identical between KVM and TCG.
>>>>>
>>>>>
>>>>> You lost me here :) Right now the sequence is:
>>>>> 1. smp_parse
>>>>> 2. config_accelerator
>>>>> 3. machine_init
>>>>>
>>>>> I proposed:
>>>>> 1. config_accelerator - reads the max thread count from KVM
>>>>>    (and initializes the "host" CPU type)
>>>>> 2. smp_parse - does the parsing using the smp_threads value
>>>>>    tweaked in 1)
>>>>> 3. machine_init - creates CPUs which may or may not be "host".
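To make that ordering concrete, a rough sketch (function names
approximate QEMU's vl.c; exactly where the KVM limit gets read is an
assumption on my part):

    /* Sketch of the proposed startup ordering, not actual vl.c code.
     * The accelerator comes up first so that smp_parse() can see the
     * KVM-reported threads-per-core limit. */
    configure_accelerator();  /* KVM: query the host SMT limit and
                                 record it; also makes the "host"
                                 CPU type usable */
    smp_parse(opts);          /* parse -smp; clamp or resolve
                                 threads=max against that limit */
    machine->init(args);      /* create vCPUs, which may or may not
                                 be of the "host" type */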
>>>>
>>>> The patch as it is now is very simple and well-contained. I
>>>> wonder how much it would expand if we added a max thread count to
>>>> the cpu class. It seems like the need for a max thread count is
>>>> specific to PowerPC.
>>>
>>> It's only ever useful on IBM POWER. Any other PowerPC system can
>>> partition vcpus by host threads; it's only IBM POWER hardware that's
>>> as broken as it is.
>>>
>>
>>> But really, what problem are you trying to solve here? Do you have
>>> users that don't understand the constraints they have? You will
>>> still have this even with this patch: if you make threads=max the
>>> default for KVM (which is a bad idea, because it diverges from TCG),
>>> -smp 5 would still allocate 2 host cores on POWER7, so you're not
>>> using your resources effectively.
>>
>> I am not changing the default here. I am adding the ability to
>> choose the maximum.
>>
>>
>>> With the patch you're also not taking the compat thread count into
>>> account. A POWER7 compat system would still need to manually specify
>>> threads=4, no?
>>
>> Nope. I will need to set smp_threads = min(smp_threads, 4) (and I do
>> this in my patches, which I think I already posted but need to
>> repost), so if it is 1 by default, it will still be 1.
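In code the clamp is a single line; a sketch (where exactly it runs
depends on the compat-mode patches):

    /* Sketch: when the guest negotiates POWER7 compatibility mode,
     * cap threads per core at POWER7's limit of 4.  MIN() is QEMU's
     * macro; a default smp_threads of 1 stays 1. */
    smp_threads = MIN(smp_threads, 4);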
>
> Bleks. So suddenly we have one piece of code overriding magic that
> happens in another piece of code. This sounds like five years from now
> nobody will have a clue what's going on here anymore :).
git blame will know.
> Can't we determine the number of "default threads" at a common place,
> preferably derived from cpu type?
We can do anything. I asked how exactly, as I really (really) do not
understand the details.
> I do remember that Anthony wanted to introduce a concept of "CPU
> sockets" once, where you just plug a 6-core POWER7 into a slot and that
> automatically gives you 6x4 threads of the specific cpu type. That's
> probably overkill for this scenario though :). Be aware that you're not
> the first person thinking along these lines.
Sure I am not.
>>> I do see that the user experience is slightly suboptimal, but by
>>> creating this special case there's a good chance you're doing more
>>> harm than good.
>>
>> Again: I am not changing the default. I am adding an additional
>> option (which is false by default) to use the CPU's maximum. What
>> harm can it possibly do?
>>
>> I am definitely missing your point, sorry.
>
> In which case would this help anyone? I'm serious, please describe
> exactly the reason for this patch. I'm sure you had people ask you to
> write it, or it fixes a nuisance you have yourself.
Right now, by default, people use 1 (one) thread from each core. -smp 8
takes 8 CPU cores instead of 8 threads of one CPU. How is that good?
And I'll ask again - how can the user know the maximum number of threads
per core other than by guessing it from the CPU model? Does any user
space tool print it?
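For the record, QEMU itself can get the number from KVM: on POWER hosts
the KVM_CAP_PPC_SMT capability reports threads per core. Roughly (the
helper name is made up):

    #include "sysemu/kvm.h"   /* kvm_enabled(), kvm_check_extension() */

    /* Sketch: host SMT threads per core as reported by KVM
     * (e.g. 4 on POWER7, 8 on POWER8), falling back to 1 when
     * the capability is absent or we are not running on KVM. */
    static int host_threads_per_core(void)
    {
        int smt = kvm_enabled()
                ? kvm_check_extension(kvm_state, KVM_CAP_PPC_SMT)
                : 0;
        return smt > 0 ? smt : 1;
    }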
--
Alexey
Thread overview: 17+ messages
2014-01-09 5:34 [Qemu-devel] [RFC PATCH v2] PPC: smp: autodetect numbers of threads per core Alexey Kardashevskiy
2014-01-09 21:00 ` Mike Day
2014-01-09 22:12 ` Alexey Kardashevskiy
2014-01-09 23:40 ` Alexander Graf
2014-01-10 1:39 ` Alexey Kardashevskiy
2014-01-10 13:03 ` Mike Day
2014-01-10 13:28 ` Alexander Graf
2014-01-10 13:42 ` Alexey Kardashevskiy
2014-01-10 14:00 ` Alexander Graf
2014-01-10 14:13 ` Alexey Kardashevskiy [this message]
2014-01-10 14:20 ` Alexander Graf
2014-01-10 14:21 ` Mike Day
2014-01-10 14:25 ` Alexander Graf
2014-01-10 14:29 ` Mike Day
2014-01-10 14:35 ` Alexey Kardashevskiy
2014-01-10 14:12 ` Mike Day
2014-01-09 22:50 ` Scott Wood