From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Jan Beulich"
Subject: Re: [PATCH v2 for 4.5] ioreq-server: handle the lack of a default emulator properly
Date: Mon, 29 Sep 2014 14:03:41 +0100
Message-ID: <542974CD020000780003A831@mail.emea.novell.com>
References: <1411986115-5147-1-git-send-email-paul.durrant@citrix.com> <54293B8B.2080507@citrix.com> <54296AB3020000780003A7B8@mail.emea.novell.com> <542955BA.5070109@citrix.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <542955BA.5070109@citrix.com>
Content-Disposition: inline
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: Andrew Cooper
Cc: Paul Durrant , Keir Fraser , xen-devel@lists.xen.org
List-Id: xen-devel@lists.xenproject.org

>>> On 29.09.14 at 14:51, wrote:
> On 29/09/14 13:20, Jan Beulich wrote:
>>>>> On 29.09.14 at 12:59, wrote:
>>> On 29/09/14 11:21, Paul Durrant wrote:
>>>> I started porting QEMU over to use the new ioreq server API and hit a
>>>> problem with PCI bus enumeration. Because, with my patches, QEMU only
>>>> registers to handle config space accesses for the PCI device it
>>>> implements, all other attempts by the guest to access 0xcfc go
>>>> nowhere, and this was causing the vcpu to wedge up because nothing
>>>> was completing the I/O.
>>>>
>>>> This patch introduces an I/O completion handler into the hypervisor
>>>> for the case where no ioreq server matches a particular request. Read
>>>> requests are completed with 0xf's in the data buffer; writes and all
>>>> other I/O req types are ignored.
>>>>
>>>> Signed-off-by: Paul Durrant
>>>> Cc: Keir Fraser
>>>> Cc: Jan Beulich
>>>> ---
>>>> v2: - First non-RFC submission
>>>>     - Removed warning on unemulated MMIO accesses
>>>>
>>>>  xen/arch/x86/hvm/hvm.c | 35 ++++++++++++++++++++++++++++++++---
>>>>  1 file changed, 32 insertions(+), 3 deletions(-)
>>>>
>>>> diff --git a/xen/arch/x86/hvm/hvm.c b/xen/arch/x86/hvm/hvm.c
>>>> index 5c7e0a4..822ac37 100644
>>>> --- a/xen/arch/x86/hvm/hvm.c
>>>> +++ b/xen/arch/x86/hvm/hvm.c
>>>> @@ -2386,8 +2386,7 @@ static struct hvm_ioreq_server *hvm_select_ioreq_server(struct domain *d,
>>>>      if ( list_empty(&d->arch.hvm_domain.ioreq_server.list) )
>>>>          return NULL;
>>>>
>>>> -    if ( list_is_singular(&d->arch.hvm_domain.ioreq_server.list) ||
>>>> -         (p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO) )
>>>> +    if ( p->type != IOREQ_TYPE_COPY && p->type != IOREQ_TYPE_PIO )
>>>>          return d->arch.hvm_domain.default_ioreq_server;
>>>>
>>>>      cf8 = d->arch.hvm_domain.pci_cf8;
>>>> @@ -2618,12 +2617,42 @@ bool_t hvm_send_assist_req_to_ioreq_server(struct hvm_ioreq_server *s,
>>>>      return 0;
>>>>  }
>>>>
>>>> +static bool_t hvm_complete_assist_req(ioreq_t *p)
>>>> +{
>>>> +    switch (p->type)
>>>> +    {
>>>> +    case IOREQ_TYPE_COPY:
>>>> +    case IOREQ_TYPE_PIO:
>>>> +        if ( p->dir == IOREQ_READ )
>>>> +        {
>>>> +            if ( !p->data_is_ptr )
>>>> +                p->data = ~0ul;
>>>> +            else
>>>> +            {
>>>> +                int i, sign = p->df ? -1 : 1;
>>>> +                uint32_t data = ~0;
>>>> +
>>>> +                for ( i = 0; i < p->count; i++ )
>>>> +                    hvm_copy_to_guest_phys(p->data + sign * i * p->size,
>>>> +                                           &data, p->size);
>>> This is surely bogus for an `ins` which crosses a page boundary?
>> Crossing page boundaries gets dealt with up the call stack in
>> hvmemul_linear_to_phys(), namely the path exiting with
>> X86EMUL_UNHANDLEABLE when done == 0.
>
> Paul also pointed this out in person, which indicates that
> hvm_copy_to_guest_phys() is indeed correct in this case.
>
> Therefore it is fine, but only because the caller guarantees that
> "p->data + sign * i * p->size" does not cross a page boundary.
>
> However, what I can't spot is any logic which copes with addr not being
> aligned with bytes_per_rep. This appears to be valid on x86, and would
> constitute an individual repetition accessing two pages.

Just go to the place in the code I pointed you to above - that case is
being taken care of afaict.

Jan