From: Bernie Innocenti <bernie@codewiz.org>
To: Krzysztof Halasa <khc@pm.waw.pl>
Cc: Ward Vandewege <ward@gnu.org>,
	lkml <linux-kernel@vger.kernel.org>,
	Jan Seiffert <kaffeemonster@googlemail.com>
Subject: Re: pc300too on a modern kernel?
Date: Mon, 22 Nov 2010 11:17:55 -0500
Message-ID: <1290442675.5515.92.camel@giskard.codewiz.org>
In-Reply-To: <m3tyjd7za9.fsf@intrepid.localdomain>

On Fri, 2010-11-19 at 22:56 +0100, Krzysztof Halasa wrote:

> It seems it happens this way:
> - sca_xmit() fills the whole ring (leaving one descriptor empty as
>   designed - for EDA to work)
> - the chip transmits something and signals IRQ->sca_tx_done()
> - sca_tx_done can't see any descriptor processed and only wakes the
>   queue. Perhaps we should only wake the queue if at least one
>   descriptor has been processed - though sca_tx_done() should never be
>   called otherwise.
> - sca_xmit is called again with full ring, thus BUG().
> 
> I wonder if the following helps (untested):
> 
> --- a/drivers/net/wan/hd64572.c
> +++ b/drivers/net/wan/hd64572.c
> @@ -293,6 +293,7 @@ static inline void sca_tx_done(port_t *port)
>  	struct net_device *dev = port->netdev;
>  	card_t* card = port->card;
>  	u8 stat;
> +	int wake = 0;
>  
>  	spin_lock(&port->lock);
>  
> @@ -316,10 +317,12 @@ static inline void sca_tx_done(port_t *port)
>  			dev->stats.tx_bytes += readw(&desc->len);
>  		}
>  		writeb(0, &desc->stat);	/* Free descriptor */
> +		wake = 1;
>  		port->txlast = (port->txlast + 1) % card->tx_ring_buffers;
>  	}
>  
> -	netif_wake_queue(dev);
> +	if (wake)
> +		netif_wake_queue(dev);
>  	spin_unlock(&port->lock);
>  }

Last Friday I applied a patch very similar to this one, with a printk on
the no-wake case.
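
For reference, the variant I tested looked roughly like this (from
memory, so the exact printk wording is approximate):

-	netif_wake_queue(dev);
+	if (wake)
+		netif_wake_queue(dev);
+	else
+		printk(KERN_DEBUG "%s: tx_done with no processed descriptors\n",
+		       dev->name);
 	spin_unlock(&port->lock);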

As you predicted, this made the BUG_ON() disappear. My printk fired at
approximately the same frequency as the debug statements I had in
sca_xmit(), thus confirming your hypothesis.

Now the question is: why do we get so many spurious interrupts?
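
If it helps to quantify them, a counter in the no-wake path could tell
us how often this actually happens (just a sketch -- the spurious_tx
field doesn't exist and would have to be added to port_t):

	if (!wake) {
		port->spurious_tx++;	/* hypothetical new counter in port_t */
		if (net_ratelimit())
			printk(KERN_DEBUG "%s: %u spurious tx_done interrupts\n",
			       dev->name, port->spurious_tx);
	}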

With this workaround applied, we're still seeing occasional clusters of
packet loss. We're working to graph the ping loss alongside traffic to
see if there's any correlation.

-- 
   // Bernie Innocenti - http://codewiz.org/
 \X/  Sugar Labs       - http://sugarlabs.org/

