public inbox for linux-kernel@vger.kernel.org
From: Roland Stigge <stigge@antcom.de>
To: Ben Hutchings <bhutchings@solarflare.com>
Cc: davem@davemloft.net, jeffrey.t.kirsher@intel.com,
	alexander.h.duyck@intel.com, eilong@broadcom.com,
	ian.campbell@citrix.com, netdev@vger.kernel.org,
	w.sang@pengutronix.de, linux-kernel@vger.kernel.org,
	kevin.wells@nxp.com, linux-arm-kernel@lists.infradead.org,
	arnd@arndb.de, baruch@tkos.co.il, joe@perches.com
Subject: Re: [PATCH v4] lpc32xx: Added ethernet driver
Date: Tue, 06 Mar 2012 09:53:45 +0100	[thread overview]
Message-ID: <4F55D099.5070006@antcom.de> (raw)
In-Reply-To: <1330987524.2538.61.camel@bwh-desktop>

Hi Ben,

thank you for your review!

On 03/05/2012 11:45 PM, Ben Hutchings wrote:
> [...]
>> +static int lpc_eth_poll(struct napi_struct *napi, int budget)
>> +{
>> +	struct netdata_local *pldat = container_of(napi,
>> +			struct netdata_local, napi);
>> +	struct net_device *ndev = pldat->ndev;
>> +	unsigned long flags;
>> +	int rx_done = 0;
>> +
>> +	spin_lock_irqsave(&pldat->lock, flags);
>> +
>> +	__lpc_handle_xmit(ndev);
>> +	rx_done = __lpc_handle_recv(ndev, budget);
>> +
>> +	if (rx_done < budget) {
>> +		napi_complete(napi);
>> +		lpc_eth_enable_int(pldat->net_base);
>> +	}
>> +
>> +	spin_unlock_irqrestore(&pldat->lock, flags);
>
> This is really sad.  You implement NAPI but then take away most of the
> benefits of that by disabling interrupts.
>
> It looks like you could safely unlock pldat->lock before calling
> __lpc_handle_recv - nothing else manipulates RX queue state so no lock
> is required.
>
> As for the TX side, you can probably use the TX queue lock
> (__netif_tx_lock, __netif_tx_unlock) to serialise with
> lpc_eth_hard_start_xmit() and avoid taking pldat->lock in either
> __lpc_handle_xmit() or here.

Sounds reasonable; I will do it that way.
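To make sure I understand the suggestion, here is a sketch of the restructured poll function: TX completion serialised on the TX queue lock instead of pldat->lock, and RX handled with no lock at all. Names are taken from the patch; this is untested, and I am assuming lpc_eth_enable_int() is safe to call without pldat->lock held:

```c
static int lpc_eth_poll(struct napi_struct *napi, int budget)
{
	struct netdata_local *pldat = container_of(napi,
			struct netdata_local, napi);
	struct net_device *ndev = pldat->ndev;
	struct netdev_queue *txq = netdev_get_tx_queue(ndev, 0);
	int rx_done;

	/* Serialise TX completion against lpc_eth_hard_start_xmit()
	 * using the TX queue lock rather than pldat->lock, so that
	 * interrupts stay enabled during the poll. */
	__netif_tx_lock(txq, smp_processor_id());
	__lpc_handle_xmit(ndev);
	__netif_tx_unlock(txq);

	/* RX queue state is only manipulated here, so no lock needed. */
	rx_done = __lpc_handle_recv(ndev, budget);

	if (rx_done < budget) {
		napi_complete(napi);
		lpc_eth_enable_int(pldat->net_base);
	}

	return rx_done;
}
```

Does that match what you had in mind?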

However, I modelled this function on
drivers/net/ethernet/via/via-velocity.c:velocity_poll(), which takes its
lock with interrupts disabled around the whole poll. Is there a good
reason for doing it that way in the velocity driver, or is it done
incorrectly there as well?

Thanks,

Roland


Thread overview: 10+ messages
2012-03-05 21:40 [PATCH v4] lpc32xx: Added ethernet driver Roland Stigge
2012-03-05 22:45 ` Ben Hutchings
2012-03-06  0:49   ` Eric Dumazet
2012-03-06  1:26     ` Ben Hutchings
2012-03-06  8:53   ` Roland Stigge [this message]
2012-03-06 12:13     ` Eric Dumazet
2012-03-06 10:43   ` [PATCH v4] lpc32xx: Added ethernet driver: smp_wmb() Roland Stigge
2012-03-06 14:03     ` Ben Hutchings
2012-03-06 14:17       ` Russell King - ARM Linux
2012-03-06 15:37         ` Ben Hutchings
