From: Ingo Molnar <mingo@elte.hu>
To: David Miller <davem@davemloft.net>
Cc: johnpol@2ka.mipt.ru, nickpiggin@yahoo.com.au, wenji@fnal.gov,
akpm@osdl.org, netdev@vger.kernel.org,
linux-kernel@vger.kernel.org
Subject: Re: [patch 1/4] - Potential performance bottleneck for Linux TCP
Date: Thu, 30 Nov 2006 21:49:08 +0100
Message-ID: <20061130204908.GA19393@elte.hu>
In-Reply-To: <20061130.123853.10298783.davem@davemloft.net>

* David Miller <davem@davemloft.net> wrote:
> > disk I/O is typically not CPU bound, and i believe these TCP tests
> > /are/ CPU-bound. Otherwise there would be no expiry of the timeslice
> > to begin with and the TCP receiver task would always be boosted to
> > 'interactive' status by the scheduler and would happily chug along
> > at 500 mbits ...
>
> It's about the prioritization of the work.
>
> If all disk I/O were shut off and frozen while we copy file data into
> userspace, you'd see the same problem for disk I/O.
well, it's an issue of how much processing is done in non-prioritized
contexts. TCP is a bit more sensitive to process context being throttled
- but disk I/O is not immune either: if nothing submits new IO, or if
the task does short reads+writes, then any process-level throttling
immediately shows up in IO throughput.
but in the general sense it is /unfair/ that certain processing such as
disk and network IO can get a disproportionate amount of CPU time from
the system - just because they happen to have some of their processing
in IRQ and softirq context (which is essentially prioritized to
SCHED_FIFO 100). A system can easily spend 80% of its CPU time in
softirq context (and that is easily visible in something like an -rt
kernel, where the various softirq contexts are separate threads and you
can see 30% net-rx and 20% net-tx CPU utilization in 'top'). How is this
kind of processing different from purely process-context-based
subsystems?
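
(just to make the comparison concrete: here is a minimal user-space
sketch - purely illustrative, not part of any patch in this thread - of
what a process would have to do to get scheduled on a comparable
footing. a SCHED_FIFO task at the maximum priority preempts every
SCHED_NORMAL task on its CPU, which is roughly the footing softirq
processing gets for free:)

  #include <sched.h>
  #include <stdio.h>
  #include <string.h>

  int main(void)
  {
  	struct sched_param p;

  	memset(&p, 0, sizeof(p));
  	/* returns 99 on Linux - the top SCHED_FIFO priority */
  	p.sched_priority = sched_get_priority_max(SCHED_FIFO);

  	/* needs CAP_SYS_NICE / root, otherwise fails with EPERM */
  	if (sched_setscheduler(0, SCHED_FIFO, &p) < 0) {
  		perror("sched_setscheduler");
  		return 1;
  	}

  	/* from here on this task runs ahead of every SCHED_NORMAL
  	   task on its CPU whenever it is runnable - no timeslice
  	   throttling, no interactivity heuristics */
  	return 0;
  }

nothing sane runs its real workload like that - the point is only that
softirq context gets this kind of precedence implicitly, without any of
the fairness logic that applies to process context.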
so i agree with you that by tweaking the TCP stack to be less sensitive
to process throttling, you /will/ improve the relative performance of the
TCP receiver task - but in general system design and scheduler design
terms it's not a win.
i'd also agree with the notion that the current 'throttling' of process
contexts can be abrupt and uncooperative, and hence the TCP stack could
get more out of the same amount of CPU time if it used it in a smarter
way. As i pointed out in the first mail, i'd support the TCP stack
getting the ability to query how much of its timeslice it has left - or
even the scheduler notifying the TCP stack via some downcall when
current->time_slice reaches 1 (or something like that).
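
(very roughly, the direction i mean - a sketch only, against the O(1)
scheduler's per-task ->time_slice field; the helper name is made up and
no such query/downcall interface exists today:)

  /* hypothetical helper - not an existing kernel interface */
  static inline int task_slice_nearly_expired(void)
  {
  	/* ->time_slice is the remaining timeslice in scheduler ticks */
  	return current->time_slice <= 1;
  }

  /* ... and then at a convenient point in the tcp_recvmsg() copy loop: */
  	if (task_slice_nearly_expired()) {
  		/* drop the socket lock so the queued-up backlog can be
  		   processed, and give up the CPU at a boundary of our
  		   own choosing instead of being preempted mid-copy */
  		release_sock(sk);
  		cond_resched();
  		lock_sock(sk);
  	}

that way the TCP receiver would spend its ~10% of CPU time at points of
its own choosing, instead of being throttled at arbitrary ones.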
So i don't support the scheme proposed here, the blatant bending of the
priority scale towards the TCP workload. Instead what i'd like to see is
more TCP performance (and nicer over-the-wire behavior - no retransmits,
for example) /with the same 10% CPU time used/. Are we in
rough agreement?
Ingo