From mboxrd@z Thu Jan  1 00:00:00 1970
From: Arianna Avanzini
Subject: [PATCH v9 04/14] arch/arm: unmap partially-mapped I/O-memory regions
Date: Wed, 2 Jul 2014 20:42:13 +0200
Message-ID: <1404326543-16875-5-git-send-email-avanzini.arianna@gmail.com>
References: <1404326543-16875-1-git-send-email-avanzini.arianna@gmail.com>
Mime-Version: 1.0
Content-Type: text/plain; charset="us-ascii"
Content-Transfer-Encoding: 7bit
In-Reply-To: <1404326543-16875-1-git-send-email-avanzini.arianna@gmail.com>
Sender: xen-devel-bounces@lists.xen.org
Errors-To: xen-devel-bounces@lists.xen.org
To: xen-devel@lists.xen.org
Cc: Ian.Campbell@eu.citrix.com, paolo.valente@unimore.it, keir@xen.org,
	stefano.stabellini@eu.citrix.com, Ian.Jackson@eu.citrix.com,
	dario.faggioli@citrix.com, tim@xen.org, julien.grall@citrix.com,
	etrudeau@broadcom.com, andrew.cooper3@citrix.com, JBeulich@suse.com,
	avanzini.arianna@gmail.com, viktor.kleinik@globallogic.com
List-Id: xen-devel@lists.xenproject.org

This commit changes the function apply_p2m_changes() to unwind the
changes performed while mapping an I/O-memory region if errors are seen
during the operation. This avoids leaving I/O-memory areas only
partially accessible to guests.

Signed-off-by: Arianna Avanzini
Cc: Dario Faggioli
Cc: Paolo Valente
Cc: Stefano Stabellini
Cc: Julien Grall
Cc: Ian Campbell
Cc: Jan Beulich
Cc: Keir Fraser
Cc: Tim Deegan
Cc: Ian Jackson
Cc: Andrew Cooper
Cc: Eric Trudeau
Cc: Viktor Kleinik
---
v9:
    - Let apply_p2m_changes() unwind its own progress instead of relying
      on the caller to unmap partially-mapped I/O-memory regions.
---
 xen/arch/arm/p2m.c | 16 ++++++++++++----
 1 file changed, 12 insertions(+), 4 deletions(-)

diff --git a/xen/arch/arm/p2m.c b/xen/arch/arm/p2m.c
index 22646e9..92fc4ec 100644
--- a/xen/arch/arm/p2m.c
+++ b/xen/arch/arm/p2m.c
@@ -314,7 +314,7 @@ static int apply_p2m_changes(struct domain *d,
     unsigned long cur_first_page = ~0,
                   cur_first_offset = ~0,
                   cur_second_offset = ~0;
-    unsigned long count = 0;
+    unsigned long count = 0, inserted = 0;
     unsigned int flush = 0;
     bool_t populate = (op == INSERT || op == ALLOCATE);
     lpae_t pte;
@@ -328,6 +328,7 @@ static int apply_p2m_changes(struct domain *d,
 
     spin_lock(&p2m->lock);
 
+unwind:
     addr = start_gpaddr;
     while ( addr < end_gpaddr )
     {
@@ -338,7 +339,9 @@ static int apply_p2m_changes(struct domain *d,
             if ( !first )
             {
                 rc = -EINVAL;
-                goto out;
+                end_gpaddr = start_gpaddr + inserted * PAGE_SIZE;
+                op = REMOVE;
+                goto unwind;
             }
             cur_first_page = p2m_first_level_index(addr);
         }
@@ -357,7 +360,9 @@ static int apply_p2m_changes(struct domain *d,
             if ( rc < 0 ) {
                 printk("p2m_populate_ram: L1 failed\n");
-                goto out;
+                end_gpaddr = start_gpaddr + inserted * PAGE_SIZE;
+                op = REMOVE;
+                goto unwind;
             }
         }
@@ -384,7 +389,9 @@ static int apply_p2m_changes(struct domain *d,
                                       flush_pt);
             if ( rc < 0 ) {
                 printk("p2m_populate_ram: L2 failed\n");
-                goto out;
+                end_gpaddr = start_gpaddr + inserted * PAGE_SIZE;
+                op = REMOVE;
+                goto unwind;
             }
         }
@@ -441,6 +448,7 @@ static int apply_p2m_changes(struct domain *d,
                 pte = mfn_to_p2m_entry(maddr >> PAGE_SHIFT, mattr, t);
 
                 p2m_write_pte(&third[third_table_offset(addr)], pte, flush_pt);
+                inserted++;
             }
             break;
         case REMOVE:
-- 
1.9.3