From: Jan Kiszka
Subject: device-assignment: difference between assigned_dev_iomem_map and ...map_slow
Date: Thu, 21 Apr 2011 18:23:01 +0200
Message-ID: <4DB059E5.7000003@siemens.com>
To: kvm

Hi,

latest qemu-kvm bails out on cleanup as it tries to call
cpu_register_physical_memory with a zero-sized region of an assigned
device. That made me dig into the setup/cleanup of memory-mapped I/O
regions, trying to consolidate and fix the code.

What are the differences between normal and slow MMIO regions? The
former are mapped directly to the physical device (via
qemu_ram_alloc_from_ptr + cpu_register_physical_memory), while the
latter have to be dispatched in user land (thus cpu_register_io_memory
+ cpu_register_physical_memory), right? But why do we need to postpone
cpu_register_io_memory to assigned_dev_iomem_map_slow? It looks like
that's effectively the only difference between the two mapping
callbacks (subtracting some bugs and dead code). Can't we set up the
I/O region in assigned_dev_register_regions analogously to normal
regions?

BTW, the current code is leaking the slow I/O region on cleanup.

Comments appreciated; I will translate them into a cleanup patch
series.

Thanks,
Jan (who wanted to do something completely different...)
-- 
Siemens AG, Corporate Technology, CT T DE IT 1
Corporate Competence Center Embedded Linux