public inbox for linux-kernel@vger.kernel.org
From: Paul Jackson <pj@sgi.com>
To: Paul Mackerras <paulus@samba.org>
Cc: torvalds@osdl.org, akpm@osdl.org, linux-kernel@vger.kernel.org
Subject: Re: cpu_exclusive sched domains fix broke ppc64
Date: Wed, 24 Aug 2005 05:04:54 -0700	[thread overview]
Message-ID: <20050824050454.012f8af1.pj@sgi.com> (raw)
In-Reply-To: <17164.11361.437380.179789@cargo.ozlabs.ibm.com>

Paul Mackerras wrote:
> I'm not sure what the best way to fix this is

Thank you for reporting this.  Likely the best way to fix this for now,
since we are late in a release (Linus will probably want to whack me
upside the head for breaking his build ;), is to leave
node_to_cpumask and for_each_cpu_mask exactly as they are, and have the
code that my cpu_exclusive sched domain patch added make a local copy
of the cpumask.

I just sent off a patch to do this - quite untested so far.

I am now trying to fire up crosstools to verify the build.
But if you can get it to build anytime soon, let me know.  My
crosstools are rusty -- it might take me a bit to resuscitate them.

I also am not sure what is the best way to fix this detail with
node_to_cpumask and for_each_cpu_mask in the long term.  The choices I
see are:

 1) Leave it be - which makes it easy to trip the build bug I hit,
    due to the different styles of node_to_cpumask, inline or
    macro, on different archs.

 2) Make node_to_cpumask a macro on all archs, though that
    makes it even easier than it is now to write code that
    appears to modify a local variable, but actually modifies
    some global array of the per-node cpumasks, which could
    lead to some juicy runtime bugs.

 3) Make node_to_cpumask an inline on all archs, though that might
    force a local stack copy of a cpumask in places that might
    be performance critical on archs with big cpumasks.

 4) Perhaps some more subtle combination of macros/inlines
    can be all things to all archs.

I'm not going to unravel the above tonight.

> it seems unfortunate that for_each_cpu_mask
> requires the mask to be an lvalue, but that isn't documented anywhere
> that I can see.

Are you saying that it's unfortunate that for_each_cpu_mask requires
an lvalue, or that it's unfortunate that this isn't documented?

Or both ;).

-- 
                  I won't rest till it's the best ...
                  Programmer, Linux Scalability
                  Paul Jackson <pj@sgi.com> 1.925.600.0401

Thread overview: 4+ messages
     [not found] <17164.11361.437380.179789@cargo.ozlabs.ibm.com>
2005-08-24 12:04 ` Paul Jackson [this message]
2005-08-25  5:32 ` cpu_exclusive sched domains fix broke ppc64 Paul Jackson
2005-08-25  5:40   ` Paul Mackerras
2005-08-25  5:48     ` Paul Jackson
