From: Andrew Cooper
Subject: Re: [PATCH 2/5] x86/HVM: fix direct PCI port I/O emulation retry and error handling
Date: Tue, 8 Oct 2013 19:13:08 +0100
Message-ID: <52544B34.4080305@citrix.com>
In-Reply-To: <5249917002000078000F807B@nat28.tlf.novell.com>
To: Jan Beulich
Cc: xen-devel, Keir Fraser

On 30/09/13 13:57, Jan Beulich wrote:
> dpci_ioport_{read,write}() guest memory access failure handling should
> be modelled after process_portio_intercept()'s (and others): Upon
> encountering an error on other than the first iteration, the count
> successfully handled needs to be stored and X86EMUL_OKAY returned, in
> order for the generic instruction emulator to update register state
> correctly before reporting failure or retrying (both of which would
> only happen after re-invoking emulation).
>
> Further we leverage (and slightly extend, due to the above mentioned
> need to return X86EMUL_OKAY) the "large MMIO" retry model.
>
> Note that there is still a special case not explicitly taken care of
> here: While the first retry on the last iteration of a "rep ins"
> correctly recovers the already read data, an eventual subsequent retry
> is being handled by the pre-existing mmio-large logic (through
> hvmemul_do_io() storing the [recovered] data [again], also taking into
> consideration that the emulator converts a single iteration "ins" to
> ->read_io() plus ->write()).
>
> Also fix an off-by-one in the mmio-large-read logic, and slightly
> simplify the copying of the data.
>
> Signed-off-by: Jan Beulich

Reviewed-by: Andrew Cooper

One trivial thought comes to mind which you could easily do when
committing the patch...

> @@ -316,22 +325,51 @@ static int dpci_ioport_read(uint32_t mpo
>
>          if ( p->data_is_ptr )
>          {
> -            int ret;
> -            ret = hvm_copy_to_guest_phys(p->data + step * i, &data, p->size);
> -            if ( (ret == HVMCOPY_gfn_paged_out) ||
> -                 (ret == HVMCOPY_gfn_shared) )
> -                return X86EMUL_RETRY;
> +            switch ( hvm_copy_to_guest_phys(p->data + step * i,
> +                                            &data, p->size) )
> +            {
> +            case HVMCOPY_okay:
> +                break;
> +            case HVMCOPY_gfn_paged_out:
> +            case HVMCOPY_gfn_shared:
> +                rc = X86EMUL_RETRY;
> +                break;
> +            case HVMCOPY_bad_gfn_to_mfn:
> +                /* Drop the write as real hardware would. */
> +                continue;
> +            case HVMCOPY_bad_gva_to_gfn:
> +                ASSERT(0);
> +                /* fall through */
> +            default:
> +                rc = X86EMUL_UNHANDLEABLE;
> +                break;
> +            }
> +            if ( rc != X86EMUL_OKAY)
> +                break;
>          }
>          else
>              p->data = data;
>      }
>

Nuke the trailing whitespace on the line above here, which will
fractionally increase the size of the hunk below.

~Andrew

> -    return X86EMUL_OKAY;
> +    if ( rc == X86EMUL_RETRY )
> +    {
> +        vio->mmio_retry = 1;
> +        vio->mmio_large_read_bytes = p->size;
> +        memcpy(vio->mmio_large_read, &data, p->size);
> +    }
> +
> +    if ( i != 0 )
> +    {
> +        p->count = i;
> +        rc = X86EMUL_OKAY;
> +    }
> +
> +    return rc;
>  }
>
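
P.S. For readers less familiar with the retry model under discussion, below
is a minimal, self-contained sketch of the "report the completed count and
return OKAY" pattern the commit message describes.  All names here
(emul_rc_t, copy_rc_t, struct ioreq, copy_to_guest, port_read, rep_port_read)
are made up for illustration; this is not the actual Xen code, only the
control-flow shape of it.

/*
 * Sketch of the partial-completion pattern: on a per-iteration error after
 * the first iteration, record how many iterations completed and return OKAY
 * so the generic emulator can update register state first; the failure or
 * retry is then handled when emulation is re-invoked.
 * Hypothetical names throughout -- not the Xen implementation.
 */
#include <stdint.h>

typedef enum { EMUL_OKAY, EMUL_RETRY, EMUL_UNHANDLEABLE } emul_rc_t;
typedef enum { COPY_OKAY, COPY_PAGED_OUT, COPY_BAD_GFN } copy_rc_t;

struct ioreq {
    uint64_t data;      /* guest-physical buffer address for rep forms */
    uint32_t count;     /* iterations requested; updated on partial completion */
    uint32_t size;      /* bytes per iteration */
    int data_is_ptr;    /* rep form: data is a pointer, not an immediate */
};

/* Trivial stubs so the sketch compiles; a real handler would access the
 * device and guest memory here. */
static copy_rc_t copy_to_guest(uint64_t gpa, const void *buf, uint32_t len)
{
    (void)gpa; (void)buf; (void)len;
    return COPY_OKAY;
}

static uint32_t port_read(uint32_t size)
{
    (void)size;
    return 0;
}

emul_rc_t rep_port_read(struct ioreq *p)
{
    emul_rc_t rc = EMUL_OKAY;
    uint32_t i;

    for ( i = 0; i < p->count; i++ )
    {
        uint32_t data = port_read(p->size);

        if ( p->data_is_ptr )
        {
            switch ( copy_to_guest(p->data + (uint64_t)p->size * i,
                                   &data, p->size) )
            {
            case COPY_OKAY:
                break;
            case COPY_PAGED_OUT:
                rc = EMUL_RETRY;    /* page must be brought back first */
                break;
            case COPY_BAD_GFN:
                continue;           /* drop, as real hardware would */
            default:
                rc = EMUL_UNHANDLEABLE;
                break;
            }
            if ( rc != EMUL_OKAY )
                break;
        }
        else
            p->data = data;
    }

    /* (The actual patch additionally latches the already-read data for the
     * retry path via the mmio-large buffer; omitted here.) */

    if ( i != 0 && rc != EMUL_OKAY )
    {
        /* Some iterations completed: report that count and return OKAY so
         * the emulator updates register state before failing or retrying. */
        p->count = i;
        rc = EMUL_OKAY;
    }

    return rc;
}

The important part is the final block: truncating p->count to the number of
iterations actually completed lets the generic emulator commit the partial
rep before the retry or failure is surfaced on the re-invocation, which is
exactly what the dpci_ioport_{read,write} paths were previously missing.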