netdev.vger.kernel.org archive mirror
From: Michael Buesch <mb@bu3sch.de>
To: "Simon Barber" <simon@devicescape.com>
Cc: "Jiri Benc" <jbenc@suse.cz>,
	linville@tuxdriver.com, netdev@vger.kernel.org
Subject: Re: [PATCH] d80211: add ieee80211_stop_queues()
Date: Wed, 23 Aug 2006 21:45:05 +0200	[thread overview]
Message-ID: <200608232145.05763.mb@bu3sch.de> (raw)
In-Reply-To: <C86180A8C204554D8A3323D8F6B0A29F0165B5C6@dhost002-46.dex002.intermedia.net>

On Wednesday 23 August 2006 21:30, Simon Barber wrote:
> One question - in most hardware implementations today the queues are DMA
> rings. In this case the right length of the queue is determined by the
> interrupt/tx_softirq latency required to keep the queue from becoming
> empty. With 802.11 there are large differences in the time it takes to
> transmit different frames - a full size 1Mbit frame vs. a short 54Mbit
> frame. Would it be worth optimizing the DMA queue length to be a
> function of the amount of time rather than number of frames?

I doubt that the added complexity would do any good.
We should look at what a ring actually is.
A ring is an allocated region of memory with DMA descriptors in it.
The ring size is the number of DMA descriptors, so in theory the
descriptor count alone determines how much memory to allocate.
In practice we have alignment constraints (at least for bcm43xx):
for a ring we always allocate one full page of memory, regardless
of how much of it is actually used by descriptors.
One wireless-specific thing remains, though. We store meta data
for each descriptor (the ieee80211_tx_control at least).
So basically only the memory consumption of the meta data
could be optimized.
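
To make that concrete, here is a rough sketch of such a layout (all
names and the descriptor format are invented for illustration; this is
not the actual bcm43xx code, and the d80211 header name is assumed):

#include <linux/dma-mapping.h>
#include <linux/skbuff.h>
#include <linux/slab.h>
#include <net/d80211.h>		/* ieee80211_tx_control (assumed header name) */

struct my_dmadesc {			/* hardware descriptor, invented layout */
	__le32 control;
	__le32 address;
};

struct my_txmeta {			/* driver-private meta data, one per slot */
	struct sk_buff *skb;
	struct ieee80211_tx_control txctl;
};

struct my_txring {
	struct my_dmadesc *descs;	/* one full page, DMA-coherent */
	dma_addr_t descs_dma;
	struct my_txmeta *meta;		/* nr_slots entries */
	int nr_slots;
	int used_slots;
	int queue;			/* d80211 queue index this ring serves */
};

static int my_txring_alloc(struct device *dev, struct my_txring *ring,
			   int nr_slots)
{
	/* Always a full page, no matter how many descriptors we use. */
	ring->descs = dma_alloc_coherent(dev, PAGE_SIZE,
					 &ring->descs_dma, GFP_KERNEL);
	if (!ring->descs)
		return -ENOMEM;

	/* Only this allocation shrinks when the ring is made smaller. */
	ring->meta = kcalloc(nr_slots, sizeof(*ring->meta), GFP_KERNEL);
	if (!ring->meta) {
		dma_free_coherent(dev, PAGE_SIZE, ring->descs,
				  ring->descs_dma);
		return -ENOMEM;
	}
	ring->nr_slots = nr_slots;
	ring->used_slots = 0;
	return 0;
}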

We used to have 512 TX descriptors per ring until I recently
submitted a patch to lower that to 128. I did this because, in
stress tests, the maximum number of descriptors in use never went
above about 80 on my machines.
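
This ties into the queue-stop calls from the patch: when the ring gets
close to full, the driver stops the d80211 queue so frames back up
above the driver instead of overflowing the ring, and the ring only
has to bridge the interrupt/softirq latency. A sketch, reusing the
invented my_txring above (helper and threshold are made up, and the
ieee80211_stop_queue() prototype is assumed, not copied from the
d80211 tree):

#define MY_TX_HEADROOM	8		/* invented slack value */

/* Invented helper that writes the hardware descriptor; not shown. */
static void my_fill_descriptor(struct my_txring *ring, int slot,
			       struct sk_buff *skb,
			       struct ieee80211_tx_control *ctl);

static void my_tx_submit(struct ieee80211_hw *hw, struct my_txring *ring,
			 struct sk_buff *skb,
			 struct ieee80211_tx_control *ctl)
{
	int slot = ring->used_slots;	/* next free slot, simplified */

	my_fill_descriptor(ring, slot, skb, ctl);
	ring->used_slots++;

	/* Stop the queue before the ring fills up; the TX-status
	 * interrupt handler wakes it again once slots are freed. */
	if (ring->used_slots >= ring->nr_slots - MY_TX_HEADROOM)
		ieee80211_stop_queue(hw, ring->queue);
}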

I think it basically comes down to the question:
Do we want to save 1 kB of memory[*] but pay the price of
additional code complexity?

[*] 1 kB is a random value invented by me. I did not calculate
    it, but it should be somewhere around that value.
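
    (Purely as an order-of-magnitude illustration, with an assumed
    rather than measured size: if the per-slot meta data is around
    40 bytes, then saving a couple dozen slots per ring frees roughly
    25 * 40 = 1000 bytes, so "about 1 kB" is the right ballpark.)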

-- 
Greetings Michael.

Thread overview: 11+ messages
2006-08-23 11:44 [PATCH] d80211: add ieee80211_stop_queues() Michael Buesch
2006-08-23 19:30 ` Simon Barber
2006-08-23 19:45   ` Michael Buesch [this message]
2006-08-23 19:54     ` Simon Barber
2006-08-23 20:04       ` Michael Buesch
2006-08-23 20:10         ` Simon Barber
2006-08-23 20:20           ` Michael Buesch
2006-08-23 20:32             ` Simon Barber
2006-08-23 20:57               ` Michael Buesch
2006-08-23 21:12                 ` Simon Barber
2006-08-23 22:07                   ` Michael Buesch
