From: Prarit Bhargava <prarit@redhat.com>
To: rui wang <ruiv.wang@gmail.com>
Cc: linux-kernel@vger.kernel.org, x86@kernel.org, "Chen, Gong" <gong.chen@intel.com>, "Yu, Fenghua" <fenghua.yu@intel.com>
Subject: Re: [PATCH] x86: Add check for number of available vectors before CPU down
Date: Thu, 19 Dec 2013 08:33:56 -0500
Message-ID: <52B2F5C4.1080202@redhat.com>
In-Reply-To: <CANVTcTbCeQKa_cSY6XMg3OwMA2GJJCNWuExL_qQsBYe_Ozj6BQ@mail.gmail.com>

On 12/19/2013 02:19 AM, rui wang wrote:
> On 12/19/13, Prarit Bhargava <prarit@redhat.com> wrote:
>>
>>
>> On 12/03/2013 09:48 PM, rui wang wrote:
>>> On 11/20/13, Prarit Bhargava <prarit@redhat.com> wrote:
>>> Have you considered the case when an IRQ is destined to more than one CPU?
>>> e.g.:
>>>
>>> bash# cat /proc/irq/89/smp_affinity_list
>>> 30,62
>>> bash#
>>>
>>> In this case offlining CPU30 does not seem to require an empty vector
>>> slot. It seems that we only need to change the affinity mask of irq89.
>>> Your check_vectors() assumes that each irq on the offlining cpu
>>> requires a new vector slot.
>>>
>>
>> Rui,
>>
>> The smp_affinity_list only indicates a preferred destination of the IRQ,
>> not the *actual* CPU the IRQ is currently routed to.  So the IRQ is on
>> one of cpu 30 or 62, but not both simultaneously.
>>
> 
> It depends on how the IOAPIC (or MSI/MSI-X) is configured. An IRQ can be
> simultaneously broadcast to all destination CPUs (Fixed Mode) or
> delivered to the CPU running at the lowest priority (Lowest Priority
> Mode). It's programmed in the Delivery Mode bits of the IOAPIC's I/O
> Redirection Table registers, or in the Message Data Register in the
> case of MSI/MSI-X.

Hmm ... I didn't realize that this was a possibility.  I'll go back and rework
the patch.

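For anyone following the archive: the field Rui mentions is bits 10:8 of the
IOAPIC Redirection Table entry low dword, and the same bits of the MSI Message
Data register, so a reworked check could in principle read it back to tell the
two cases apart.  A quick sketch of the encoding -- the macro and function
names below are mine, the values are from the IOAPIC/MSI specs:

#define DELIVERY_MODE_SHIFT	8
#define DELIVERY_MODE_MASK	0x7
#define DM_FIXED		0x0	/* delivered to every CPU in the destination */
#define DM_LOWEST_PRIO		0x1	/* one CPU, picked by priority arbitration */

/* Works on the IOAPIC RTE low dword or the MSI Message Data register. */
static inline unsigned int delivery_mode(u32 reg)
{
	return (reg >> DELIVERY_MODE_SHIFT) & DELIVERY_MODE_MASK;
}
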
Thanks for the info Rui!

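For the rework itself, I suspect the right shape is to count only the IRQs
that would lose their *last* online CPU when the dying CPU goes away, rather
than assuming every IRQ on that CPU needs a fresh vector slot.  An untested
sketch against the 3.13-era genirq API (vectors_needed_for_offline() is just
a name I made up, and locking is elided):

/*
 * Count the IRQs that must actually be migrated when @cpu goes
 * offline.  An IRQ whose affinity mask still contains another online
 * CPU can simply be re-targeted and needs no new vector slot.
 */
static int vectors_needed_for_offline(unsigned int cpu)
{
	struct irq_desc *desc;
	unsigned int irq, other;
	int needed = 0;

	for_each_irq_desc(irq, desc) {
		struct irq_data *data = &desc->irq_data;
		bool has_other_target = false;

		if (!cpumask_test_cpu(cpu, data->affinity))
			continue;

		/* Is another online CPU left in the affinity mask? */
		for_each_cpu_and(other, data->affinity, cpu_online_mask) {
			if (other != cpu) {
				has_other_target = true;
				break;
			}
		}

		if (!has_other_target)
			needed++;	/* must migrate: needs a free vector */
	}
	return needed;
}
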
P.

> 
>> If the case is that 62 is being brought down, then the smp_affinity mask
>> will be updated to reflect only cpu 30 (and vice versa).
>>
> 
> Yes, the affinity mask should be updated. But if the IRQ was destined
> to more than one CPU, your "this_counter" does not seem to count the
> right numbers. Are you saying that the smp_affinity mask is broken on
> Linux, so that there's no way to configure an IRQ to target more than
> one CPU?
> 
> Thanks
> Rui
> 
>> P.
>>
