From mboxrd@z Thu Jan 1 00:00:00 1970
From: Andrew Cooper
Subject: Re: [PATCH v5 09/16] x86/hvm: limit reps to avoid the need to handle retry
Date: Thu, 2 Jul 2015 18:31:20 +0100
Message-ID: <55957568.7070505@citrix.com>
References: <1435669558-5421-1-git-send-email-paul.durrant@citrix.com> <1435669558-5421-10-git-send-email-paul.durrant@citrix.com> <55957089.2020304@citrix.com> <9AAE0902D5BC7E449B7C8E4E778ABCD025979AB3@AMSPEX01CL02.citrite.net>
In-Reply-To: <9AAE0902D5BC7E449B7C8E4E778ABCD025979AB3@AMSPEX01CL02.citrite.net>
To: Paul Durrant, "xen-devel@lists.xenproject.org"
Cc: "Keir (Xen.org)", Jan Beulich
List-Id: xen-devel@lists.xenproject.org

On 02/07/15 18:14, Paul Durrant wrote:
>> -----Original Message-----
>> From: Andrew Cooper [mailto:andrew.cooper3@citrix.com]
>> Sent: 02 July 2015 18:11
>> To: Paul Durrant; xen-devel@lists.xenproject.org
>> Cc: Keir (Xen.org); Jan Beulich
>> Subject: Re: [PATCH v5 09/16] x86/hvm: limit reps to avoid the need to
>> handle retry
>>
>> On 30/06/15 14:05, Paul Durrant wrote:
>>> @@ -235,7 +219,7 @@ static int hvmemul_do_io_buffer(
>>>
>>>      BUG_ON(buffer == NULL);
>>>
>>> -    rc = hvmemul_do_io(is_mmio, addr, reps, size, dir, df, 0,
>>> +    rc = hvmemul_do_io(is_mmio, addr, *reps, size, dir, df, 0,
>>>                         (uintptr_t)buffer);
>>>      if ( rc == X86EMUL_UNHANDLEABLE && dir == IOREQ_READ )
>>>          memset(buffer, 0xff, size);
>>> @@ -287,17 +271,53 @@ static int hvmemul_do_io_addr(
>>>      bool_t is_mmio, paddr_t addr, unsigned long *reps,
>>>      unsigned int size, uint8_t dir, bool_t df, paddr_t ram_gpa)
>>>  {
>>> -    struct page_info *ram_page;
>>> +    struct vcpu *v = current;
>>
>> curr.
>>
>>> +    unsigned long ram_gmfn = paddr_to_pfn(ram_gpa);
>>
>> ram_gfn.
>>
>>> +    unsigned int page_off = ram_gpa & (PAGE_SIZE - 1);
>>
>> offset and ~PAGE_MASK.
>>
>>> +    struct page_info *ram_page[2];
>>> +    int nr_pages = 0;
>>
>> unsigned int.
>>
>>> +    unsigned long count;
>>>      int rc;
>>>
>>> -    rc = hvmemul_acquire_page(paddr_to_pfn(ram_gpa), &ram_page);
>>> +    rc = hvmemul_acquire_page(ram_gmfn, &ram_page[nr_pages]);
>>>      if ( rc != X86EMUL_OKAY )
>>> -        return rc;
>>> +        goto out;
>>>
>>> -    rc = hvmemul_do_io(is_mmio, addr, reps, size, dir, df, 1,
>>> +    nr_pages++;
>>> +
>>> +    /* Detemine how many reps will fit within this page */
>>> +    count = min_t(unsigned long,
>>> +                  *reps,
>>> +                  df ?
>>> +                  (page_off + size - 1) / size :
>>> +                  (PAGE_SIZE - page_off) / size);
>>> +
>>> +    if ( count == 0 )
>>> +    {
>>> +        /*
>>> +         * This access must span two pages, so grab a reference to
>>> +         * the next page and do a single rep.
>>> +         */
>>> +        rc = hvmemul_acquire_page(df ? ram_gmfn - 1 : ram_gmfn + 1,
>>> +                                  &ram_page[nr_pages]);
>>
>> All guest-based ways to trigger an IO spanning a page boundary will be
>> based on linear address. If a guest has paging enabled, this movement
>> to an adjacent physical is not valid. A new pagetable walk will be
>> required to determine the correct second page.
>
> I don't think that is true. hvmemul_linear_to_phys() will break at non-contiguous boundaries.
Hmm - it looks like hvmemul_linear_to_phys() will indeed bail with X86EMUL_UNHANDLEABLE on an access that straddles a non-contiguous boundary, so the gfn +/- 1 here can only ever refer to the correct adjacent page. In that case, a comment confirming the safety of the +/- 1 would be useful to the next person who follows the same line of reasoning as I did.

~Andrew
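
P.S. For anyone following along, the clamping in the hunk above boils down to
something like the standalone sketch below (illustrative only -
reps_within_page() is a made-up helper mirroring the min_t() expression, not
code from the patch, and PAGE_SIZE is assumed to be 4k):

#include <stdio.h>

#define PAGE_SIZE 4096UL

/*
 * Hypothetical helper: how many reps of 'size' bytes, the first one
 * starting at offset 'page_off' within a page, can be issued before the
 * access would cross into the adjacent page?  'df' is the direction flag
 * (non-zero => addresses descend on each rep).  A result of 0 means even
 * the first rep straddles the page boundary.
 */
static unsigned long reps_within_page(unsigned long page_off,
                                      unsigned long size,
                                      int df, unsigned long reps)
{
    unsigned long fit = df ? (page_off + size - 1) / size
                           : (PAGE_SIZE - page_off) / size;

    return fit < reps ? fit : reps; /* i.e. min(*reps, fit) */
}

int main(void)
{
    /* A 4-byte ascending rep op starting 8 bytes before the page end. */
    printf("%lu\n", reps_within_page(PAGE_SIZE - 8, 4, 0, 100)); /* -> 2 */
    /* One starting 2 bytes before the end: it straddles, so 0 reps fit. */
    printf("%lu\n", reps_within_page(PAGE_SIZE - 2, 4, 0, 100)); /* -> 0 */
    return 0;
}

The count == 0 case is exactly the second example: the single rep spans two
pages, which is where the question about how the second page is found comes in.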