From: Denys Fedoryshchenko <denys@visp.net.lb>
To: "Brandeburg, Jesse" <jesse.brandeburg@intel.com>
Cc: "Eric Dumazet" <dada1@cosmosbay.com>, netdev@vger.kernel.org
Subject: Re: packetloss, on e1000e worse than r8169?
Date: Wed, 18 Jun 2008 20:03:37 +0300
Message-ID: <200806182003.37120.denys@visp.net.lb>
In-Reply-To: <36D9DB17C6DE9E40B059440DB8D95F52056F2823@orsmsx418.amr.corp.intel.com>
On Wednesday 18 June 2008 19:50, Brandeburg, Jesse wrote:
> Denys Fedoryshchenko wrote:
> > After trying everything, it looks like the problem is the PBS size and,
> > as a result, the PBA (RX FIFO) size.
>
> agreed
>
> > On ICH8 it is small, only a 16K PBS (0x10), with RX/TX set to 8K each; even
> > if I set 0xd/0x3 it doesn't help (I didn't measure whether it makes less
>
> just to make sure, you set PBA=0xd, correct?
Yes
>
> > packet loss). As I understand it, I only need to set RX; TX is calculated
> > automatically. Both motherboards I tried had ICH8.
>
> your understanding is correct: the lower 8 bits of PBA represent the RX FIFO
> size, and the TX FIFO size is computed as (PBS - (lower 8 bits of) PBA)
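To make sure I read that right, here is a minimal sketch of the arithmetic (illustration only, not the driver code). With my values, PBS = 0x10 and PBA = 0xd, that gives 13 kB RX and 3 kB TX:

/* Illustration only: the PBS/PBA split described above, in 1 kB units. */
#include <stdio.h>

int main(void)
{
	unsigned int pbs = 0x10;           /* total packet buffer size, kB */
	unsigned int pba = 0x0d;           /* lower 8 bits = RX FIFO size, kB */
	unsigned int rx_kb = pba & 0xff;   /* RX FIFO */
	unsigned int tx_kb = pbs - rx_kb;  /* TX FIFO = PBS - RX */

	printf("RX FIFO: %u kB, TX FIFO: %u kB\n", rx_kb, tx_kb);
	return 0;
}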
>
> Please note this is mostly documented in the software developer's manuals
> posted both at intel.com and e1000.sourceforge.net. The ICH8 specifics are
> mostly covered in the chipset documentation.
>
> > All the other servers I mentioned, which carry a large enough load, have:
> > 1) Sun - PBA 48K (82546EB)
> > 2) DP35DP - PBA 16K (ICH9)
> >
> > Also, ICH8 is missing some features that ICH9 supports, such as
> > FLAG_HAS_ERT, but it looks like ERT is useful only for jumbo frames.
> > And of course ICH8 doesn't support jumbo frames, maybe because of the
> > limited PBS.
> >
> > Is the PBS size a hardware limitation of ICH8?
> yes, the working size of the FIFO is 16 kB total (PBS)
>
> > Is it possible I am right in my conclusions?
> yes, client parts will generally not buffer as much data, due to a smaller
> FIFO, than the server parts, which typically have a 64 kB total FIFO.
>
> > Such details about network adapters will probably be useful for the Vyatta
> > guys when choosing the proper network adapter for their systems :-)
>
> agreed, the rule here would be: don't use client parts for server-class
> workloads. Unfortunately we don't control which machines certain server
> vendors put client parts like 82573 or ICH8/ICH9 into, so sometimes you have
> a "low end" server with a client gigabit Ethernet part.
Yes, 0xd. I am now using the onboard 82546GB on an old Intel Xeon 3.0 GHz; flow control works flawlessly.
cpu family : 15
model : 4
model name : Intel(R) Xeon(TM) CPU 3.00GHz
stepping : 1
cpu MHz : 2992.650
cache size : 1024 KB
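(For the record, the flow-control state can be checked and forced with ethtool; eth0 here is just a placeholder for the actual interface.)

# show the negotiated pause (flow control) parameters
ethtool -a eth0
# force RX/TX pause frames on if autonegotiation did not enable them
ethtool -A eth0 rx on tx on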
I have packet loss, but now it is 1.516e-06 %, which is an acceptable number for me.
I had to increase the ring size, otherwise I was getting rx_no_buffer_count increments in the stats.
It is still the famous rx_missed_errors: 3616.
But as I reported in a private mail, rx_missed_errors used to be larger than tx_deferred_ok; now it is:
rx_missed_errors: 3633
tx_deferred_ok: 19145380
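(For anyone reproducing this: the ring resize and the counters above come from ethtool; eth0 and the 4096 value are only examples, the supported maximum depends on the adapter.)

# show current and maximum ring sizes
ethtool -g eth0
# enlarge the RX descriptor ring
ethtool -G eth0 rx 4096
# watch the relevant counters
ethtool -S eth0 | grep -E 'rx_no_buffer_count|rx_missed_errors|tx_deferred_ok'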
Now the server is handling 180 Kpps RX, 800 Mbps RX+TX, 4 VLANs.
The latest git kernel is running. I will probably do some profiling of iptables on it soon, and some other tests.
Maybe I will also try ICH9, just to compare, if I get the chance and a way to buy it.
MegaRouterXeon-KARAM ~ # mpstat 30
Linux 2.6.26-rc6-git4-build-0029 (MegaRouterXeon-KARAM) 06/18/08
20:04:02 CPU %user %nice %sys %iowait %irq %soft %steal %idle intr/s
20:04:32 all 0.09 0.00 1.19 0.00 1.94 14.93 0.00 81.84 17775.00
20:05:02 all 0.09 0.00 1.32 0.00 1.72 14.70 0.00 82.17 17810.23
--
------
Technical Manager
Virtual ISP S.A.L.
Lebanon