From: Jarek Poplawski <jarkao2@gmail.com>
To: Neil Horman <nhorman@tuxdriver.com>
Cc: Stephen Hemminger <shemminger@vyatta.com>,
David Miller <davem@davemloft.net>,
netdev@vger.kernel.org
Subject: Re: [PATCH] netpoll: fix race on poll_list resulting in garbage entry
Date: Thu, 18 Dec 2008 09:04:14 +0000 [thread overview]
Message-ID: <20081218090414.GB8416@ff.dom.local> (raw)
In-Reply-To: <20081218011306.GA32191@hmsreliant.think-freely.org>
On Wed, Dec 17, 2008 at 08:13:06PM -0500, Neil Horman wrote:
...
> Since we migrated the napi polling infrastructure out of the net_device
> structure, the netif_rx_[prep|schedule|complete] API has taken a net_device
> structure pointer, which in all cases goes unused. This patch modifies the API
> to remove that parameter and fixes up all the required call sites.
>
> I've obviously not tested it with all available NICs, but I built an
> allmodconfig successfully with no errors introduced, and booted a kernel with
> this change on a few systems.
>
> Regards
> Neil
>
> Signed-off-by: Neil Horman <nhorman@tuxdriver.com>
...
> diff --git a/include/linux/netdevice.h b/include/linux/netdevice.h
> index e26f549..d2f692d 100644
> --- a/include/linux/netdevice.h
> +++ b/include/linux/netdevice.h
> @@ -1444,8 +1444,7 @@ static inline u32 netif_msg_init(int debug_value, int default_msg_enable_bits)
> }
>
> /* Test if receive needs to be scheduled but only if up */
> -static inline int netif_rx_schedule_prep(struct net_device *dev,
> - struct napi_struct *napi)
> +static inline int netif_rx_schedule_prep(struct napi_struct *napi)
> {
> return napi_schedule_prep(napi);
> }
> @@ -1453,27 +1452,24 @@ static inline int netif_rx_schedule_prep(struct net_device *dev,
> /* Add interface to tail of rx poll list. This assumes that _prep has
> * already been called and returned 1.
> */
> -static inline void __netif_rx_schedule(struct net_device *dev,
> - struct napi_struct *napi)
> +static inline void __netif_rx_schedule(struct napi_struct *napi)
> {
> __napi_schedule(napi);
> }
My proposal is to remove this duplication (in another patch or patches) by
renaming __napi_schedule() to __netif_rx_schedule() etc., i.e. using the
__netif_rx names instead of the __napi ones wherever possible.
Jarek P.
>
> /* Try to reschedule poll. Called by irq handler. */
>
> -static inline void netif_rx_schedule(struct net_device *dev,
> - struct napi_struct *napi)
> +static inline void netif_rx_schedule(struct napi_struct *napi)
> {
> - if (netif_rx_schedule_prep(dev, napi))
> - __netif_rx_schedule(dev, napi);
> + if (netif_rx_schedule_prep(napi))
> + __netif_rx_schedule(napi);
> }
>
> /* Try to reschedule poll. Called by dev->poll() after netif_rx_complete(). */
> -static inline int netif_rx_reschedule(struct net_device *dev,
> - struct napi_struct *napi)
> +static inline int netif_rx_reschedule(struct napi_struct *napi)
> {
> if (napi_schedule_prep(napi)) {
> - __netif_rx_schedule(dev, napi);
> + __netif_rx_schedule(napi);
> return 1;
> }
> return 0;
> @@ -1482,8 +1478,7 @@ static inline int netif_rx_reschedule(struct net_device *dev,
> /* same as netif_rx_complete, except that local_irq_save(flags)
> * has already been issued
> */
> -static inline void __netif_rx_complete(struct net_device *dev,
> - struct napi_struct *napi)
> +static inline void __netif_rx_complete(struct napi_struct *napi)
> {
> __napi_complete(napi);
> }
> @@ -1493,8 +1488,7 @@ static inline void __netif_rx_complete(struct net_device *dev,
> * it completes the work. The device cannot be out of poll list at this
> * moment, it is BUG().
> */
> -static inline void netif_rx_complete(struct net_device *dev,
> - struct napi_struct *napi)
> +static inline void netif_rx_complete(struct napi_struct *napi)
> {
> unsigned long flags;
>
> @@ -1505,7 +1499,7 @@ static inline void netif_rx_complete(struct net_device *dev,
> if (unlikely(test_bit(NAPI_STATE_NPSVC, &napi->state)))
> return;
> local_irq_save(flags);
> - __netif_rx_complete(dev, napi);
> + __netif_rx_complete(napi);
> local_irq_restore(flags);
> }
>
> --
> /****************************************************
> * Neil Horman <nhorman@tuxdriver.com>
> * Software Engineer, Red Hat
> ****************************************************/
Thread overview: 25+ messages
2008-12-09 21:06 [PATCH] netpoll: fix race on poll_list resulting in garbage entry Neil Horman
2008-12-10 7:22 ` David Miller
2008-12-11 13:07 ` Jarek Poplawski
2008-12-11 14:29 ` Neil Horman
2008-12-11 17:01 ` Stephen Hemminger
2008-12-11 18:15 ` Neil Horman
2008-12-12 0:03 ` Stephen Hemminger
2008-12-12 12:18 ` Neil Horman
2008-12-16 23:55 ` David Miller
2008-12-17 21:16 ` Neil Horman
2008-12-17 21:31 ` Stephen Hemminger
2008-12-17 23:44 ` Neil Horman
2008-12-18 1:13 ` Neil Horman
2008-12-18 3:29 ` David Miller
2008-12-18 14:47 ` Neil Horman
2008-12-18 19:52 ` Neil Horman
2008-12-18 22:40 ` Ben Hutchings
2008-12-18 23:30 ` Johannes Berg
2008-12-19 1:25 ` Neil Horman
2008-12-19 6:42 ` David Miller
2008-12-19 13:42 ` Neil Horman
2008-12-23 4:43 ` David Miller
2008-12-18 9:04 ` Jarek Poplawski [this message]
2008-12-12 7:07 ` Jarek Poplawski
2008-12-12 13:31 ` Neil Horman