From: Peter P Waskiewicz Jr <peter.p.waskiewicz.jr@intel.com>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: "davem@davemloft.net" <davem@davemloft.net>,
	"arjan@linux.jf.intel.com" <arjan@linux.jf.intel.com>,
	"netdev@vger.kernel.org" <netdev@vger.kernel.org>,
	"linux-kernel@vger.kernel.org" <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH RFC: linux-next 1/2] irq: Add CPU mask affinity hint callback framework
Date: Thu, 29 Apr 2010 10:59:06 -0700	[thread overview]
Message-ID: <1272563946.9614.1.camel@localhost> (raw)
In-Reply-To: <alpine.LFD.2.00.1004271809080.2951@localhost.localdomain>

On Wed, 2010-04-28 at 09:45 -0700, Thomas Gleixner wrote:
> Peter,
> 
> On Tue, 27 Apr 2010, Peter P Waskiewicz Jr wrote:
> > On Tue, 27 Apr 2010, Thomas Gleixner wrote:
> > > On Sun, 18 Apr 2010, Peter P Waskiewicz Jr wrote:
> > > > +/**
> > > > + * struct irqaffinityhint - per interrupt affinity helper
> > > > + * @callback:	device driver callback function
> > > > + * @dev:	reference for the affected device
> > > > + * @irq:	interrupt number
> > > > + */
> > > > +struct irqaffinityhint {
> > > > +	irq_affinity_hint_t callback;
> > > > +	void *dev;
> > > > +	int irq;
> > > > +};
> > > 
> > > Why do you need that extra data structure ? The device and the irq
> > > number are known, so all you need is the callback itself. So no need
> > > for allocating memory ....
> > 
> > When I register the function callback with the interrupt layer, I need to
> > know what device structures to reference back in the driver.  In other words,
> > if I call into an underlying driver with just an interrupt number, then I
> > have no way of getting at the dev structures (netdevice for me, plus my
> > private adapter structures), unless I declare them globally (yuck).
> 
> Grr, I knew that I missed something. That'll teach me to review
> patches before the coffee has reached my brain cells :)
> 
> > I had a different approach before this one where I assumed the device from
> > the irq handler callback was safe to use for the device in this new callback.
> > I didn't feel really great about that, since it's an implicit assumption that
> > could cause things to go sideways really quickly.
> >
> > Let me know what you think either way.  I'm certainly willing to make a
> > change; I just don't know at this point which approach is safest, given
> > what I currently have.
> 
> So you need a reference to your device, so what about the following:
> 
> struct irq_affinity_hint;
> 
> struct irq_affinity_hint {
>        unsigned int (*callback)(unsigned int irq, struct irq_affinity_hint *hint,
> 				cpumask_var_t *mask);
> };
> 
> Now you embed that struct into your device private data structure and
> you get the reference to it back in the callback function. No extra
> kmalloc/kfree, less code.

Good idea!  I'll roll that into my new version.
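
Just so I'm sure I follow: in the driver I'd embed the hint struct in my
per-vector private data and use container_of() in the callback to get back
at it.  Rough, untested sketch (the structure and mask field below are just
stand-ins for my real private data):

struct my_q_vector {
	struct my_adapter *adapter;
	struct cpumask preferred_cpus;	/* where we'd like this vector to run */
	struct irq_affinity_hint hint;	/* embedded, no kmalloc/kfree needed */
	/* ... */
};

static unsigned int my_affinity_callback(unsigned int irq,
					 struct irq_affinity_hint *hint,
					 cpumask_var_t *mask)
{
	/* recover the private data the hint is embedded in */
	struct my_q_vector *q_vector =
			container_of(hint, struct my_q_vector, hint);

	cpumask_copy(*mask, &q_vector->preferred_cpus);
	return 0;
}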

> One other thing I noticed, but forgot to comment on:
> 
> > +static int irq_affinity_hint_proc_show(struct seq_file *m, void *v)
> > +{
> > +	struct irq_desc *desc = irq_to_desc((long)m->private);
> > +	struct cpumask mask;
> > +	unsigned int ret = 0;
> 
>  Why do we return 0, when there is no callback and no hint available ?

I initialized it to 0 to remove a compiler warning; I can put more
thought into it and assign a more appropriate return value.

> > +
> 
>   We don't want to put a cpumask on the stack. Please make that:
> 
>      	cpumask_var_t mask;
> 
> 	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
> 	       return -ENOMEM;

I'll roll this into my next version.
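
And I'll make sure the mask gets freed again on the exit path -- something
along these lines, I assume (sketch only):

	cpumask_var_t mask;

	if (!alloc_cpumask_var(&mask, GFP_KERNEL))
		return -ENOMEM;

	/* ... have the callback fill the mask, seq_cpumask() it ... */

	free_cpumask_var(mask);
	seq_putc(m, '\n');
	return ret;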

> > +	if (desc->hint && desc->hint->callback) {
> 
>   The access to desc-> needs to be protected with
>   desc->lock. Otherwise you might race with a callback unregister.

Good point.  I'll fix this.
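
To make sure I get that right, I'm thinking of something along these lines
(untested sketch, with mask being the cpumask_var_t from above and the
callback following your suggested signature; I still need to double-check
that calling the driver callback with desc->lock held is acceptable):

	struct irq_affinity_hint *hint;
	unsigned long flags;

	raw_spin_lock_irqsave(&desc->lock, flags);
	/* field name per the current patch; layout may shift in v2 */
	hint = desc->hint;
	if (hint && hint->callback)
		ret = hint->callback((long)m->private, hint, &mask);
	raw_spin_unlock_irqrestore(&desc->lock, flags);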

> > +		ret = desc->hint->callback(&mask, (long)m->private,
> > +		                           desc->hint->dev);
> > +		if (!ret)
> > +			seq_cpumask(m, &mask);
> > +	}
> > +
> > +	seq_putc(m, '\n');
> > +	return ret;
> > +}
> 
> Thanks,
> 

Thanks for the feedback.  I'll have the updated patches for review soon.

-PJ



Thread overview: 11+ messages
2010-04-19  4:57 [PATCH RFC: linux-next 1/2] irq: Add CPU mask affinity hint callback framework Peter P Waskiewicz Jr
2010-04-19  4:58 ` [PATCH RFC: linux-next 2/2] ixgbe: Example usage of the new IRQ affinity_hint callback Peter P Waskiewicz Jr
2010-04-21  2:28 ` [PATCH RFC: linux-next 1/2] irq: Add CPU mask affinity hint callback framework David Miller
2010-04-27 12:32 ` Thomas Gleixner
2010-04-27 16:04   ` Peter P Waskiewicz Jr
2010-04-28 16:45     ` Thomas Gleixner
2010-04-29 17:59       ` Peter P Waskiewicz Jr [this message]
2010-04-29 19:48         ` Thomas Gleixner
2010-04-29 20:28           ` Peter P Waskiewicz Jr
2010-04-29 20:39             ` Thomas Gleixner
2010-04-29 21:29               ` Peter P Waskiewicz Jr
