From: Paolo Bonzini <pbonzini@redhat.com>
To: Paul Durrant <Paul.Durrant@citrix.com>,
	Martin Cerveny <M.Cerveny@computer.org>
Cc: "xen-devel@lists.xensource.com" <xen-devel@lists.xensource.com>,
	George Dunlap <George.Dunlap@citrix.com>
Subject: Re: Overlaped PIO with multiple ioreq_server (Xen4.6.1)
Date: Mon, 9 May 2016 18:18:25 +0200
Message-ID: <5730B851.3000109@redhat.com>
In-Reply-To: <0c7ffa680c8b41898771010239e16111@AMSPEX02CL03.citrite.net>



On 09/05/2016 18:14, Paul Durrant wrote:
>> -----Original Message-----
>> From: Paul Durrant
>> Sent: 09 May 2016 17:02
>> To: Paul Durrant; Paolo Bonzini; Martin Cerveny
>> Cc: xen-devel@lists.xensource.com; George Dunlap
>> Subject: RE: [Xen-devel] Overlaped PIO with multiple ioreq_server
>> (Xen4.6.1)
>>
>>> -----Original Message-----
>>> From: Xen-devel [mailto:xen-devel-bounces@lists.xen.org] On Behalf Of
>>> Paul Durrant
>>> Sent: 09 May 2016 14:00
>>> To: Paolo Bonzini; Martin Cerveny
>>> Cc: xen-devel@lists.xensource.com; George Dunlap
>>> Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server
>>> (Xen4.6.1)
>>>
>>>> -----Original Message-----
>>>> From: Paolo Bonzini [mailto:pbonzini@redhat.com]
>>>> Sent: 09 May 2016 13:56
>>>> To: Paul Durrant; Martin Cerveny
>>>> Cc: George Dunlap; xen-devel@lists.xensource.com
>>>> Subject: Re: [Xen-devel] Overlaped PIO with multiple ioreq_server
>>>> (Xen4.6.1)
>>>>
>>>>
>>>>
>>>> On 28/04/2016 13:25, Paul Durrant wrote:
>>>>>> Maybe you were lucky: qemu was registered before your own demu
>>>>>> emulator.
>>>>>
>>>>> I guess I was lucky.
>>>>
>>>> Yeah, QEMU has been doing that since 2013 (commit 3bb28b7, "memory:
>>>> Provide separate handling of unassigned io ports accesses", 2013-09-05).
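
[For reference, the unassigned I/O handling that commit introduced looks
roughly like this in QEMU: reads of unclaimed ports return all ones and
writes are silently discarded.]

    /* Catch-all ops for I/O ports with no registered handler. */
    static uint64_t unassigned_io_read(void *opaque, hwaddr addr,
                                       unsigned size)
    {
        return -1ULL;            /* reads return all ones */
    }

    static void unassigned_io_write(void *opaque, hwaddr addr,
                                    uint64_t val, unsigned size)
    {
        /* writes are discarded */
    }

    const MemoryRegionOps unassigned_io_ops = {
        .read = unassigned_io_read,
        .write = unassigned_io_write,
        .endianness = DEVICE_NATIVE_ENDIAN,
    };
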
>>>>
>>>>>> I used your "demu" for testing 2 years ago and am now extending Citrix
>>>>>> "vgpu". Everything was fine up to Xen 4.5.2 (with qemu 2.0.2), but the
>>>>>> problem began when I switched to 4.6.1 (with qemu 2.2.1); maybe it was
>>>>>> just lucky registration timing before.
>>>>>
>>>>> I think Xen should really be spotting range overlaps like this, but the
>>>>> QEMU<->Xen interface will clearly need to be fixed to stop QEMU
>>>>> over-claiming I/O ports in this way.
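
[A minimal sketch of such an overlap check on the Xen side, assuming the
Xen 4.6-era layout where each ioreq server keeps one rangeset per range
type; the list and field names below are illustrative assumptions, not an
actual patch.]

    /* Hypothetical check in hvm_map_io_range_to_ioreq_server(): refuse to
     * register [start, end] for server 's' if another ioreq server already
     * claims any part of it.  The exact field names (ioreq_server.list,
     * list_entry, range[type]) are assumptions based on Xen 4.6. */
    struct hvm_ioreq_server *other;

    list_for_each_entry ( other,
                          &d->arch.hvm_domain.ioreq_server.list,
                          list_entry )
    {
        if ( other == s )
            continue;

        if ( rangeset_overlaps_range(other->range[type], start, end) )
            return -EEXIST;
    }
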
>>>>
>>>> If the handling of unassigned I/O ports is sane in Xen (in QEMU they
>>>> return all ones and discard writes),
>>>
>>> Yes, it does exactly that.
>>>
>>>> it would be okay to make the
>>>> background 0-65535 range conditional on !xen_enabled().  See
>>>> memory_map_init() in QEMU's exec.c file.
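
[A minimal sketch of that suggestion against QEMU 2.x's memory_map_init()
in exec.c: keep the 64K "io" container but only back it with
unassigned_io_ops when not running under Xen.  Untested; as noted below,
simply not creating the region at all makes QEMU crash.]

    static void memory_map_init(void)
    {
        system_memory = g_malloc(sizeof(*system_memory));
        memory_region_init(system_memory, NULL, "system", UINT64_MAX);
        address_space_init(&address_space_memory, system_memory, "memory");

        system_io = g_malloc(sizeof(*system_io));
        if (xen_enabled()) {
            /* Pure container: no catch-all callbacks, so no background
             * 0-65535 PIO range is reported to Xen via the listener. */
            memory_region_init(system_io, NULL, "io", 65536);
        } else {
            memory_region_init_io(system_io, NULL, &unassigned_io_ops, NULL,
                                  "io", 65536);
        }
        address_space_init(&address_space_io, system_io, "I/O");
    }
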
>>>>
>>>
>>> Cool. Thanks for the tip. Will have a look at that now.
>>>
>>
>> Looks like creation of the background range is required (when I simply
>> #if 0-ed out creating it, QEMU crashed on invocation). So I guess I need to
>> be able to spot, from the Xen memory listener callback, when a background
>> range is being added so that it can be ignored. The same actually goes for
>> memory as well as I/O, since Xen handles accesses to unimplemented MMIO
>> ranges in a similar fashion.
>>
> 
> In fact, this patch seems to do the trick for I/O:
> 
> diff --git a/xen-hvm.c b/xen-hvm.c
> index 039680a..8ab44f0 100644
> --- a/xen-hvm.c
> +++ b/xen-hvm.c
> @@ -510,8 +510,12 @@ static void xen_io_add(MemoryListener *listener,
>                         MemoryRegionSection *section)
>  {
>      XenIOState *state = container_of(listener, XenIOState, io_listener);
> +    MemoryRegion *mr = section->mr;
> 
> -    memory_region_ref(section->mr);
> +    if (mr->ops == &unassigned_io_ops)
> +        return;
> +
> +    memory_region_ref(mr);
> 
>      xen_map_io_section(xen_xc, xen_domid, state->ioservid, section);
>  }
> @@ -520,10 +524,14 @@ static void xen_io_del(MemoryListener *listener,
>                         MemoryRegionSection *section)
>  {
>      XenIOState *state = container_of(listener, XenIOState, io_listener);
> +    MemoryRegion *mr = section->mr;
> +
> +    if (mr->ops == &unassigned_io_ops)
> +        return;
> 
>      xen_unmap_io_section(xen_xc, xen_domid, state->ioservid, section);
> 
> -    memory_region_unref(section->mr);
> +    memory_region_unref(mr);
>  }

Looks good, feel free to Cc me if you send it to qemu-devel (though I'll
let Anthony merge it).

Paolo

