From: Avi Kivity <avi@redhat.com>
To: Marcelo Tosatti <mtosatti@redhat.com>
Cc: kvm@vger.kernel.org, quintela@redhat.com
Subject: Re: [patch 1/3] KVM: x86: add ioctls to get/set PIO state
Date: Tue, 09 Feb 2010 08:38:56 +0200
Message-ID: <4B710300.7090903@redhat.com>
In-Reply-To: <20100208224119.GA6516@amt.cnet>

On 02/09/2010 12:41 AM, Marcelo Tosatti wrote:
> On Thu, Feb 04, 2010 at 11:46:25PM +0200, Avi Kivity wrote:
>    
>> On 02/04/2010 11:36 PM, Marcelo Tosatti wrote:
>>      
>>> On Thu, Feb 04, 2010 at 09:16:47PM +0200, Avi Kivity wrote:
>>>        
>>>> On 01/28/2010 09:03 PM, Marcelo Tosatti wrote:
>>>>          
>>>>> A vcpu can be stopped after handling IO in userspace,
>>>>> but before returning to the kernel to finish processing.
>>>>>
>>>>>            
>>>> Is this strictly needed?  If we teach qemu to migrate before
>>>> executing the pio request, I think we'll be all right.  It should
>>>> work at least for IN/INS; I'm not sure about OUT/OUTS.
>>>>          
>>> It would be nice (instead of more state to keep track of between
>>> kernel/user), but the drawbacks I see are:
>>>
>>> You'd have to add a limitation so that any IN which was processed
>>> by device emulation has to re-enter the kernel to complete it (so it
>>> complicates vcpu stop in userspace).
>>>
>>>        
>> You could fix that by moving the IN emulation to before guest entry.
>> It complicates the vcpu loop a bit, but is backwards compatible and
>> all that.
>>      
> Under such a scheme, to avoid a stream of INs from temporarily blocking
> the vcpu stop capability, you'd have to requeue a signal to stop the vcpu
> (so the next IN in the stream is not executed, but complete_pio still runs).
>
> Or not process the stop signal in the first place (new state for main
> loop, "pending pio/mmio").
>    

Why?  You would handle stops exactly the same way:

vcpu_loop:
      while running:
          process_last_in()
          run_vcpu()
          handle_exit_except_in()

An IN that is stopped would simply be left unprocessed, and the next 
entry, possibly on a new host, will re-execute it.
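
For concreteness, here is a minimal C sketch of that loop.  The struct 
and the helpers (pending_in, complete_pending_in, handle_exit_except_in) 
are placeholders I'm assuming for illustration, not the real qemu-kvm 
functions:

    /* Sketch only; the real vcpu loop carries far more state. */
    struct vcpu { int running; int pending_in; };   /* minimal stand-in */
    void complete_pending_in(struct vcpu *v);       /* placeholder decls */
    void run_vcpu(struct vcpu *v);
    void handle_exit_except_in(struct vcpu *v);

    static void vcpu_loop(struct vcpu *v)
    {
        while (v->running) {
            /* Complete any IN left over from the previous exit before
             * re-entering the guest.  A stop or migration taken between
             * iterations therefore leaves the IN unexecuted, and the
             * destination host simply runs it again. */
            if (v->pending_in)
                complete_pending_in(v);

            run_vcpu(v);

            /* IN exits are only recorded here; their completion is
             * deferred to the top of the next iteration. */
            handle_exit_except_in(v);
        }
    }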

> Or even just copy the result from the QEMU device to RAX in userspace,
> which is somewhat nasty since you'd have either userspace or the kernel
> finishing the op.
>    

Definitely bad.

> For REP OUTS larger than page size, the current position is held in RCX,
> but complete_pio uses vcpu->arch.pio.cur_count and count to hold the
> position. So you either make it possible to write back vcpu->arch.pio
> to the kernel, or wait for the operation to finish (with similar
> complications regarding signal processing).
>    

RCX is always consistent, no?  So if we migrate in the middle of a REP 
OUTS, the operation will restart at the correct place?
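
For reference, the architectural behavior that would make such a restart
safe: REP OUTS updates RCX and RSI after every element, and faults or
interrupts are taken between elements with RIP still pointing at the
instruction.  A sketch of those semantics (not kvm's emulator code;
struct regs, read_guest_byte, and outb are hypothetical stand-ins for
the register file, guest memory access, and port output):

    #include <stdint.h>

    struct regs { uint64_t rcx, rsi; };             /* hypothetical */
    uint8_t read_guest_byte(uint64_t gva);          /* hypothetical */
    void outb(uint8_t val, uint16_t port);          /* hypothetical */

    static void rep_outs_byte(struct regs *regs, uint16_t port, int df)
    {
        while (regs->rcx != 0) {
            outb(read_guest_byte(regs->rsi), port); /* one element */
            regs->rsi += df ? -1 : 1;   /* RFLAGS.DF selects direction */
            regs->rcx--;
            /* Interrupts/faults are recognized between elements with
             * RIP still at the REP OUTS, so RCX/RSI describe exactly
             * the remaining work and re-execution resumes correctly. */
        }
    }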

> As I see it, the benefit of backward compatibility is not worthwhile
> compared to the complications introduced to vcpu loop processing (and
> the potential for damaging vcpu stop -> vcpu stopped latency).
>
> Are you certain it's worth avoiding the restore ioctl for pio/mmio?
>    

First, let's see if it's feasible or not.  If it's feasible, it's 
probably just a matter of rearranging things to get userspace sane.  A 
small price to pay for backward compatibility.


-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.

