From: Kevin Wolf <kwolf@redhat.com>
To: Paolo Bonzini <pbonzini@redhat.com>
Cc: Alexander Graf <agraf@suse.de>,
	Stefan Hajnoczi <stefanha@redhat.com>,
	qemu-devel@nongnu.org
Subject: Re: [Qemu-devel] [PATCH] AIO: Reduce number of threads for 32bit hosts
Date: Wed, 14 Jan 2015 14:38:05 +0100	[thread overview]
Message-ID: <20150114133805.GG5136@noname.redhat.com> (raw)
In-Reply-To: <54B65085.70007@redhat.com>

On 14.01.2015 at 12:18, Paolo Bonzini wrote:
> 
> 
> On 14/01/2015 11:20, Kevin Wolf wrote:
> >> > The same problem applies to coroutine stacks, and those cannot be
> >> > throttled down as easily.  But I guess if you limit the number of
> >> > threads, the guest gets slowed down and doesn't create as many coroutines.
> > Shouldn't we rather try and decrease the stack sizes a bit? 1 MB per
> > coroutine is really a lot, and as I understand it, threads take even
> > more by default.
> 
> Yup, 2 MB.  Last time I proposed this, I think Markus was strongly in 
> the "better safe than sorry" camp. :)
> 
> But thread pool workers definitely don't need a big stack.

Right, I think we need to consider what kind of thread it is. For the
moment, I'm talking about the block layer only.
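
Just for illustration, a minimal sketch of what a small worker stack would
look like with plain pthreads (this is not QEMU's thread-pool code, and the
~100K figure is the one discussed further down):

    #include <pthread.h>
    #include <stdio.h>

    static void *worker(void *opaque)
    {
        /* thread-pool style work: shallow call chain, no big local buffers */
        return NULL;
    }

    int main(void)
    {
        pthread_attr_t attr;
        pthread_t tid;

        pthread_attr_init(&attr);
        /* Default thread stacks are around 2 MB on glibc; request ~100 KB
         * instead.  The value must stay above PTHREAD_STACK_MIN, and some
         * systems want a multiple of the page size (102400 = 25 * 4096). */
        pthread_attr_setstacksize(&attr, 100 * 1024);
        if (pthread_create(&tid, &attr, worker, NULL) != 0) {
            fprintf(stderr, "pthread_create failed\n");
            return 1;
        }
        pthread_join(tid, NULL);
        pthread_attr_destroy(&attr);
        return 0;
    }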

> >> > It would be nice to have a way to measure coroutine stack usage, similar
> >> > to what the kernel does.
> > The information which pages are mapped should be there somewhere...
> 
> Yes, there is mincore(2).  The complicated part is doing it fast, but
> perhaps it doesn't need to be fast.

Well, what do you want to use it for? I thought it would only be for a
one-time check of where we usually end up, rather than something that
would be enabled in production, but maybe I misunderstood.
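
For what it's worth, such a one-off check could just walk the stack
allocation with mincore(2) and count resident pages. A rough sketch (the
helper name, and the assumption that the stack is a page-aligned anonymous
mmap'ed region, are mine):

    #define _DEFAULT_SOURCE
    #include <sys/mman.h>
    #include <unistd.h>
    #include <stdlib.h>

    /* Count how many pages of a page-aligned allocation are resident.
     * For an anonymous mmap'ed coroutine stack this approximates the
     * high-water mark of pages ever touched. */
    static size_t resident_pages(void *stack, size_t size)
    {
        size_t pagesize = sysconf(_SC_PAGESIZE);
        size_t npages = (size + pagesize - 1) / pagesize;
        unsigned char *vec = malloc(npages);
        size_t i, used = 0;

        if (vec && mincore(stack, size, vec) == 0) {
            for (i = 0; i < npages; i++) {
                used += vec[i] & 1;   /* bit 0: page is resident in memory */
            }
        }
        free(vec);
        return used;
    }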

> I tried gathering warnings from GCC's -Wstack-usage=1023 option and the
> block layer does not seem to have functions with huge stacks in the I/O
> path.
> 
> So, assuming a maximum stack depth of 50 (already pretty generous, since
> there shouldn't be any recursive calls), a 100K stack should be pretty
> much okay for coroutines and thread-pool threads.
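
For reference, the check described above is just a per-function compile-time
warning; the file name, function name and numbers here are made up, but the
output looks roughly like this:

    $ cc -Wstack-usage=1024 -c qcow2.c
    qcow2.c: In function 'some_cluster_func':
    qcow2.c:123:5: warning: stack usage is 4096 bytes [-Wstack-usage=]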

The potential problem in the block layer is long backing file chains.
Perhaps we need to do something to solve that iteratively instead of
recursively.
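
A hypothetical sketch of what I mean (simplified structures, not the real
BlockDriverState): the same walk done with a loop needs constant stack,
whereas the recursive form burns one frame per image in the chain:

    /* Hypothetical, simplified structure -- not QEMU's BlockDriverState. */
    typedef struct BlockNode {
        const char *filename;
        struct BlockNode *backing;   /* next image in the backing chain */
    } BlockNode;

    /* recursive: stack usage grows with the length of the backing chain */
    static int chain_depth_recursive(BlockNode *bs)
    {
        return bs ? 1 + chain_depth_recursive(bs->backing) : 0;
    }

    /* iterative: constant stack usage, however long the chain is */
    static int chain_depth_iterative(BlockNode *bs)
    {
        int depth = 0;
        for (; bs; bs = bs->backing) {
            depth++;
        }
        return depth;
    }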

> That said there are some offenders in the device models.  Other
> QemuThreads, especially VCPU threads, had better stay with a big stack.

Yes, that's not exactly surprising.

Kevin


Thread overview: 12+ messages
2015-01-14  0:56 [Qemu-devel] [PATCH] AIO: Reduce number of threads for 32bit hosts Alexander Graf
2015-01-14  7:37 ` Paolo Bonzini
2015-01-14 10:20   ` Kevin Wolf
2015-01-14 11:18     ` Paolo Bonzini
2015-01-14 13:38       ` Kevin Wolf [this message]
2015-01-14 13:49         ` Paolo Bonzini
2015-01-14 14:07           ` Kevin Wolf
2015-01-14 14:09             ` Alexander Graf
2015-01-15 10:00               ` Kevin Wolf
2015-01-14 14:24       ` Markus Armbruster
2015-02-12 15:38 ` Stefan Hajnoczi
2015-02-12 15:59   ` Kevin Wolf
