From: Nikunj A Dadhania <nikunj@linux.vnet.ibm.com>
To: David Gibson <david@gibson.dropbear.id.au>
Cc: qemu-ppc@nongnu.org, qemu-devel@nongnu.org, clg@kaod.org,
bharata@linux.vnet.ibm.com, benh@kernel.crashing.org
Subject: Re: [Qemu-devel] [PATCH] ppc/pnv: fix cores per chip for multiple cpus
Date: Wed, 20 Sep 2017 10:43:19 +0530 [thread overview]
Message-ID: <87y3pagdg0.fsf@abhimanyu.i-did-not-set--mail-host-address--so-tickle-me> (raw)
In-Reply-To: <20170920045524.GH5520@umbus.fritz.box>
David Gibson <david@gibson.dropbear.id.au> writes:
> On Wed, Sep 20, 2017 at 09:50:24AM +0530, Nikunj A Dadhania wrote:
>> David Gibson <david@gibson.dropbear.id.au> writes:
>>
>> > On Fri, Sep 15, 2017 at 02:39:16PM +0530, Nikunj A Dadhania wrote:
>> >> David Gibson <david@gibson.dropbear.id.au> writes:
>> >>
>> >> > On Fri, Sep 15, 2017 at 01:53:15PM +0530, Nikunj A Dadhania wrote:
>> >> >> David Gibson <david@gibson.dropbear.id.au> writes:
>> >> >>
>> >> >> >>
>> >> >> >> I thought I was doing the same here for PowerNV: the number of online
>> >> >> >> cores is equal to the initial online vcpus / threads per core
>> >> >> >>
>> >> >> >> int boot_cores_nr = smp_cpus / smp_threads;
>> >> >> >>
>> >> >> >> The only difference that I see in PowerNV is that we have multiple chips
>> >> >> >> (max 2, at the moment)
>> >> >> >>
>> >> >> >> cores_per_chip = smp_cpus / (smp_threads * pnv->num_chips);
>> >> >> >
>> >> >> > This doesn't make sense to me. Cores per chip should *always* equal
>> >> >> > smp_cores; you shouldn't need another calculation for it.
>> >> >> >
>> >> >> >> And in case the user has provided a sane smp_cores, we use it.
>> >> >> >
>> >> >> > If smp_cores isn't sane, you should simply reject it, not try to fix
>> >> >> > it. That's just asking for confusion.
>> >> >>
>> >> >> This is the case where the user does not provide a topology (which is a
>> >> >> valid scenario), so I'm not sure we should reject it. QEMU then defaults
>> >> >> smp_cores/smp_threads to 1. I think it makes sense to override.
>> >> >
>> >> > If you can find a way to override it by altering smp_cores when it's
>> >> > not explicitly specified, then ok.
>> >>
>> >> Should I change the global smp_cores here as well?
>> >
>> > I'm pretty uneasy with that option.
>>
>> Me too.
>>
>> > It would take a fair bit of checking to ensure that changing smp_cores
>> > is safe here. An easier-to-verify option would be to make the generic
>> > logic that splits up an unspecified -smp N into cores and sockets
>> > more flexible, possibly based on machine options for max values.
>> >
>> > That might still be more trouble than it's worth.
>>
>> I think the current approach is the simplest and least intrusive. Since we
>> are handling a case where the user has not bothered to provide a detailed
>> topology, the best we can do is create single-threaded cores equal to the
>> number of cpus.
>
> No, sorry. Having smp_cores not correspond to the number of cores per
> chip in all cases is just not ok. By all means add an error message if
> the topology isn't workable for powernv. But users having to use a
> longer command line is better than breaking basic assumptions about
> what numbers reflect what topology.
Sorry to ask again, as I am still not convinced: don't we do a similar
adjustment in spapr when the user does not provide the number of cores?
There QEMU assumes single-threaded cores and creates cores (boot_cores_nr)
that are not the same as smp_cores.
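
To make the comparison concrete, here is a minimal standalone sketch of the
two calculations being discussed. This is not the actual QEMU code; the
variable names just mirror the ones used in this thread, and the example
values (-smp 8 with no cores=/threads= given, two chips) are assumed for
illustration only:

#include <stdio.h>

int main(void)
{
    /* Assumed example: -smp 8 with no cores=/threads= on the command
     * line, so QEMU defaults smp_threads and smp_cores to 1. */
    int smp_cpus = 8;
    int smp_threads = 1;
    int smp_cores = 1;
    int num_chips = 2;          /* pnv supports at most 2 chips here */

    /* spapr: number of initially online cores */
    int boot_cores_nr = smp_cpus / smp_threads;

    /* the patch under discussion: derive cores per chip instead of
     * taking smp_cores as-is */
    int cores_per_chip = smp_cpus / (smp_threads * num_chips);

    printf("boot_cores_nr  = %d\n", boot_cores_nr);        /* 8 */
    printf("cores_per_chip = %d (smp_cores is %d)\n",
           cores_per_chip, smp_cores);                     /* 4 vs 1 */
    return 0;
}

With these defaults both derived values differ from smp_cores, which is
exactly the point being argued about above.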
Regards
Nikunj
Thread overview: 39+ messages
2017-09-06 8:27 [Qemu-devel] [PATCH] ppc/pnv: fix cores per chip for multiple cpus Nikunj A Dadhania
2017-09-08 6:38 ` Cédric Le Goater
2017-09-09 7:02 ` David Gibson
2017-09-11 5:10 ` Nikunj A Dadhania
2017-09-13 7:35 ` David Gibson
2017-09-14 5:12 ` Nikunj A Dadhania
2017-09-15 6:48 ` David Gibson
2017-09-15 8:23 ` Nikunj A Dadhania
2017-09-15 8:51 ` David Gibson
2017-09-15 9:09 ` Nikunj A Dadhania
2017-09-19 8:24 ` David Gibson
2017-09-20 4:20 ` Nikunj A Dadhania
2017-09-20 4:55 ` David Gibson
2017-09-20 5:13 ` Nikunj A Dadhania [this message]
2017-09-20 6:17 ` David Gibson
2017-09-20 6:40 ` Nikunj A Dadhania
2017-09-20 6:50 ` Nikunj A Dadhania
2017-09-20 6:57 ` David Gibson
2017-09-20 7:18 ` Nikunj A Dadhania
2017-09-20 8:12 ` Cédric Le Goater
2017-09-20 11:53 ` David Gibson
2017-09-21 3:54 ` Nikunj A Dadhania
2017-09-21 5:31 ` David Gibson
2017-09-22 6:00 ` Nikunj A Dadhania
2017-09-22 6:07 ` Cédric Le Goater
2017-09-22 10:12 ` David Gibson
2017-09-22 10:46 ` Cédric Le Goater
2017-09-22 11:20 ` David Gibson
2017-09-22 11:37 ` Cédric Le Goater
2017-09-22 11:49 ` David Gibson
2017-09-22 11:46 ` [Qemu-devel] [Qemu-ppc] " Greg Kurz
2017-09-22 10:56 ` [Qemu-devel] " Cédric Le Goater
2017-09-22 11:06 ` Nikunj A Dadhania
2017-09-22 11:36 ` David Gibson
2017-09-21 6:04 ` Cédric Le Goater
2017-09-21 7:42 ` Igor Mammedov
2017-09-22 10:09 ` David Gibson
2017-09-22 10:08 ` David Gibson
2017-09-22 10:52 ` Cédric Le Goater