public inbox for kvm@vger.kernel.org
From: Anthony Liguori <aliguori@us.ibm.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: kvm-devel@lists.sourceforge.net, Avi Kivity <avi@qumranet.com>
Subject: Re: [PATCH][RFC] Use pipe() to simulate signalfd()
Date: Tue, 29 Apr 2008 19:47:54 -0500	[thread overview]
Message-ID: <4817C1BA.70107@us.ibm.com> (raw)
In-Reply-To: <20080430003801.GB18682@dmt>

Marcelo Tosatti wrote:
>>> Moving the signal handling + pipe write to a separate thread should get
>>> rid of it.
>>
>> Yeah, but then you just introduce buffering problems since if you're
>> getting that many signals, the pipe will get full.
>
> It is OK to lose signals if you have at least one queued in the pipe.

If you're getting so many signals that you can't make forward progress 
on any system call, your application is not going to function 
anymore.  Using signals in this manner is broken by design.

>> No point in designing for something that isn't likely to happen in practice.
>
> You should not design something making the assumption that this scenario
> won't happen.
>
> For example this could happen in high throughput guests using POSIX AIO, 
> actually pretty likely to happen if data is cached in the host's pagecache.

We really just need to move away from signals as best we can.  I've 
got a patch started that implements a thread-pool based AIO mechanism 
for QEMU.  Notifications are done over a pipe so we don't have to deal 
with the unreliability of signals.

I can't imagine a guest generating so much IO that this would ever 
really happen, though.  POSIX AIO can only have one outstanding request 
per fd.  To complete the IO request, you eventually have to go back to 
the guest, and during that time the IO thread is going to be able to 
make forward progress.  You won't get a signal again until a new IO 
request is submitted.

Regards,

Anthony Liguori

> It's somewhat similar to what happens with NAPI and interrupt mitigation.




Thread overview: 12+ messages
2008-04-29 14:28 [PATCH][RFC] Use pipe() to simulate signalfd() Anthony Liguori
2008-04-29 22:37 ` Marcelo Tosatti
2008-04-29 22:42   ` Anthony Liguori
2008-04-29 23:13     ` Marcelo Tosatti
2008-04-29 23:15       ` Anthony Liguori
2008-04-29 23:37         ` Marcelo Tosatti
2008-04-29 23:44           ` Anthony Liguori
2008-04-30  0:08             ` Marcelo Tosatti
2008-04-30  0:22               ` Anthony Liguori
2008-04-30  0:38                 ` Marcelo Tosatti
2008-04-30  0:47                   ` Anthony Liguori [this message]
2008-04-30  2:16                     ` Marcelo Tosatti
