From: David Miller <davem@davemloft.net>
To: mcarlson@broadcom.com
Cc: netdev@vger.kernel.org, andy@greyhouse.net, mchan@broadcom.com
Subject: Re: [PATCH 10/13] tg3: Increase the PCI MRRS
Date: Thu, 15 Nov 2007 14:41:03 -0800 (PST)
Message-ID: <20071115.144103.24280158.davem@davemloft.net>
In-Reply-To: <1195165211.20497.107.camel@teletran1>
From: "Matt Carlson" <mcarlson@broadcom.com>
Date: Thu, 15 Nov 2007 14:20:10 -0800
> On Mon, 2007-11-12 at 21:21 -0800, David Miller wrote:
> > From: "Matt Carlson" <mcarlson@broadcom.com>
> > Date: Fri, 09 Nov 2007 16:39:01 -0800
> >
> > > Previous devices hardcoded the PCI Maximum Read Request Size to 4K. To
> > > better comply with the PCI spec, the hardware now defaults the MRRS to
> > > 512 bytes. This will yield poor driver performance if left untouched.
> > > This patch increases the MRRS to 4K on driver initialization.
> > >
> > > Signed-off-by: Matt Carlson <mcarlson@broadcom.com>
> > > Signed-off-by: Michael Chan <mchan@broadcom.com>
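The patch body itself is not quoted above, so as a point of reference,
here is a minimal sketch of the kind of change under discussion. It
assumes the kernel's pcie_set_readrq() helper and a hypothetical
example_bump_mrrs() wrapper; it is not the actual tg3 patch, which
manipulates the device registers directly.

#include <linux/pci.h>

/* Minimal sketch, not the actual tg3 patch: raise the PCIe Maximum
 * Read Request Size for a device.  pcie_set_readrq() writes the
 * encoded size into the PCIe Device Control register; 4096 bytes is
 * the largest value the spec can encode.
 */
static int example_bump_mrrs(struct pci_dev *pdev)
{
        return pcie_set_readrq(pdev, 4096);
}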
> >
> > I've applied this patch, but...
> >
> > I sense that the PCI spec wants devices to use an MRRS value of 512 in
> > order to get better fairness on a PCI-E segment amongst multiple
> > devices.
> >
> > From that perspective, jacking up the MRRS to 4096 unilaterally seems
> > like a very bad idea. If this were necessary for good performance, I'm
> > sure the PCI spec folks would have chosen a higher value.
> >
> > Or is this some tg3 specific performance issue?
>
> Keeping the MRRS at 512 introduces DMA latencies that effectively
> prevent us from achieving line rate. With a packet size of ~1.5K and
> the MRRS at 512 bytes, each packet DMA will be broken into at least 3
> DMA reads. Each DMA read takes ~1 usec to initiate, and it is this
> per-read overhead that starts to cut into total throughput.
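To put rough numbers on that, a quick back-of-envelope model; the
~1.5K packet and ~1 usec per-read initiation figures are yours, the
rest is illustrative userspace C:

#include <stdio.h>

/* Back-of-envelope: how many DMA reads does one MTU-sized frame
 * need at a given MRRS, and how much initiation overhead is that?
 */
int main(void)
{
        const int pkt_bytes = 1536;     /* ~1.5K frame */
        const int setup_ns  = 1000;     /* ~1 usec per DMA read */
        int mrrs;

        for (mrrs = 512; mrrs <= 4096; mrrs *= 2) {
                int reads = (pkt_bytes + mrrs - 1) / mrrs; /* ceil */
                printf("MRRS %4d: %d read(s), ~%d ns overhead\n",
                       mrrs, reads, reads * setup_ns);
        }
        return 0;
}

At 512 that is 3 reads and ~3 usec of setup per frame; at 2048 and
above a frame fits in a single read.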
Ok, but wouldn't every networking device on PCI need to do this then?
I want to hear what you think about this wrt. what I mentioned about
fairness above. What's the point of PCI specifying a limit to comply
with if nobody complies with the limit for localized performance
reasons?
I think this is an important issue. Someone down the road is going to
see bad disk throughput when doing lots of network transfers and
wonder why that is. That will be hard to debug after the fact, but it
is not difficult for us to do something proactive right now to prevent
the problem from happening in the first place.
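If someone does hit it, the current MRRS is at least easy to inspect;
a hypothetical diagnostic sketch, assuming pcie_get_readrq() and the
for_each_pci_dev() iterator, neither of which is in the patch:

#include <linux/pci.h>

/* Hypothetical debugging aid, not proposed for merging: log the
 * Maximum Read Request Size of every PCIe device so an unfair
 * outlier is easy to spot.
 */
static void example_audit_mrrs(void)
{
        struct pci_dev *pdev = NULL;

        for_each_pci_dev(pdev)
                if (pci_is_pcie(pdev))
                        dev_info(&pdev->dev, "MRRS is %d bytes\n",
                                 pcie_get_readrq(pdev));
}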
Thanks.