From: Jiang Liu <jiang.liu@linux.intel.com>
To: Amir Vadai <amirv@mellanox.com>
Cc: Thomas Gleixner <tglx@linutronix.de>,
Bjorn Helgaas <bhelgaas@google.com>,
Benjamin Herrenschmidt <benh@kernel.crashing.org>,
Ingo Molnar <mingo@redhat.com>, "H. Peter Anvin" <hpa@zytor.com>,
"Rafael J. Wysocki" <rjw@rjwysocki.net>,
Randy Dunlap <rdunlap@infradead.org>,
Yinghai Lu <yinghai@kernel.org>, Borislav Petkov <bp@alien8.de>,
Ido Shamay <idos@mellanox.com>,
"David S. Miller" <davem@davemloft.net>,
Or Gerlitz <ogerlitz@mellanox.com>,
Eric Dumazet <edumazet@google.com>,
Hadar Hen Zion <hadarh@mellanox.com>,
Eran Ben Elisha <eranbe@mellanox.com>,
Joe Perches <joe@perches.com>,
Saeed Mahameed <saeedm@mellanox.com>,
Matan Barak <matanb@mellanox.com>,
Konrad Rzeszutek Wilk <konrad.wilk@oracle.com>,
Tony Luck <tony.luck@intel.com>,
x86@kernel.org,
"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>,
linux-pci@vger.kernel.org, linux-acpi@vger.kernel.org,
netdev <netdev@vger.kernel.org>
Subject: Re: [RFC v1 07/11] net/mlx4: Cache irq_desc->affinity instead of irq_desc
Date: Mon, 04 May 2015 22:00:38 +0800 [thread overview]
Message-ID: <55477B86.70705@linux.intel.com> (raw)
In-Reply-To: <CAPcc5Ph4pk+4aL-PduvF67YU-A+vn2bfx6v3mr-tbY58zs7sZw@mail.gmail.com>
On 2015/5/4 20:10, Amir Vadai wrote:
> On Mon, May 4, 2015 at 6:15 AM, Jiang Liu <jiang.liu@linux.intel.com> wrote:
>> The field 'affinity' in irq_desc won't change once the irq_desc data
>> structure is created. So cache irq_desc->affinity instead of irq_desc.
>> This also helps to hide struct irq_desc from device drivers.
>
> Hi Jiang,
>
> I might not understand the new changes to the irq core, but up until
> now the affinity was changed when the user changed it through
> /proc/irq/<IRQ>/smp_affinity.
> This code monitors the affinity from the napi_poll context to detect
> affinity changes and prevent napi from continuing to run on the
> wrong CPU.
> Therefore, the affinity can't be cached at initialization time. Please
> revert this caching.
Hi Amir,
Thanks for the review :) We want to hide the irq_desc implementation
details from device drivers, so we made these changes.
irq_get_affinity_mask() returns a 'struct cpumask *', and we cache
the returned pointer. The user may still change the IRQ affinity
through /proc/irq/<IRQ>/smp_affinity, but that only changes the bitmap
pointed to by the cached pointer; it does not change the pointer
itself. So calling cpumask_test_cpu(cpu_curr, cq->irq_affinity)
should always reflect the latest affinity setting.
Or am I missing something here?
Thanks!
Gerry
>
> Thanks,
> Amir
>
Thread overview: 27+ messages (latest: 2015-05-04 14:00 UTC)
2015-05-04 3:15 [RFC v1 00/11] Split struct irq_data into common part and per-chip part Jiang Liu
2015-05-04 3:15 ` [RFC v1 01/11] genirq: Introduce struct irq_common_data to host shared irq data Jiang Liu
2015-05-08 2:23 ` Yun Wu (Abel)
2015-05-18 2:58 ` Jiang Liu
2015-05-04 3:15 ` [RFC v1 02/11] genirq: Move field 'node' from struct irq_data into struct irq_common_data Jiang Liu
2015-05-08 2:29 ` Yun Wu (Abel)
2015-05-08 3:04 ` Yun Wu (Abel)
2015-05-15 20:42 ` Thomas Gleixner
2015-05-04 3:15 ` [RFC v1 03/11] genirq: Use CONFIG_NUMA instead of CONFIG_SMP to guard irq_common_data.node Jiang Liu
2015-05-15 20:44 ` Thomas Gleixner
2015-05-18 5:17 ` Jiang Liu
2015-05-04 3:15 ` [RFC v1 04/11] genirq: Move field 'handler_data' from struct irq_data into struct irq_common_data Jiang Liu
2015-05-04 3:15 ` [RFC v1 05/11] mn10300: Fix incorrect use of data->affinity Jiang Liu
2015-05-04 3:15 ` [RFC v1 07/11] net/mlx4: Cache irq_desc->affinity instead of irq_desc Jiang Liu
2015-05-04 12:10 ` Amir Vadai
2015-05-04 14:00 ` Jiang Liu [this message]
2015-05-05 9:07 ` Amir Vadai
2015-05-04 15:10 ` Thomas Gleixner
2015-05-05 9:17 ` Amir Vadai
2015-05-05 14:53 ` Thomas Gleixner
2015-05-07 10:41 ` Amir Vadai
2015-05-04 3:15 ` [RFC v1 08/11] genirq: Move field 'affinity' from struct irq_data into struct irq_common_data Jiang Liu
2015-05-04 3:15 ` [RFC v1 09/11] genirq: Use helper function to access irq_data->msi_desc Jiang Liu
2015-05-04 3:15 ` [RFC v1 10/11] genirq: Move field 'msi_desc' from struct irq_data into struct irq_common_data Jiang Liu
2015-05-04 3:15 ` [RFC v1 11/11] genirq: Pass irq_data to helper function __irq_set_chip_handler_name_locked() Jiang Liu
2015-05-15 20:48 ` Thomas Gleixner
2015-05-15 20:57 ` [RFC v1 00/11] Split struct irq_data into common part and per-chip part Thomas Gleixner