Message-ID: <50600F7B.5080106@redhat.com>
Date: Mon, 24 Sep 2012 09:44:59 +0200
From: Avi Kivity
References: <50597D1F.3070607@redhat.com> <505991A2.6090709@siemens.com> <5059954A.50408@redhat.com>
Subject: Re: [Qemu-devel] [big lock] Discussion about the convention of device's DMA each other after breaking down biglock
To: liu ping fan
Cc: Jan Kiszka, Marcelo Tosatti, "qemu-devel@nongnu.org", Anthony Liguori, Paolo Bonzini

On 09/24/2012 08:33 AM, liu ping fan wrote:
> On Wed, Sep 19, 2012 at 5:50 PM, Avi Kivity wrote:
> > On 09/19/2012 12:34 PM, Jan Kiszka wrote:
> >>
> >> What about the following:
> >>
> >> What we really need to support in practice is an MMIO access that
> >> triggers a RAM access by the device model. Scenarios where a device
> >> access triggers another MMIO access could likely just be rejected
> >> without causing trouble.
> >>
> >> So, when we dispatch a request to a device, we mark that the current
> >> thread is in an MMIO dispatch and reject any follow-up c_p_m_rw that
> >> does _not_ target RAM, i.e. is another, nested MMIO request -
> >> independent of its destination. How many of the known issues would
> >> this solve? And what would remain open?
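A minimal sketch of the nesting guard described above, assuming a per-thread flag checked on each dispatch. The names here (dispatch_access, in_mmio_dispatch, RegionType) are illustrative stand-ins, not actual QEMU identifiers:

```c
#include <stdbool.h>

/* Illustrative sketch, not QEMU code: a per-thread flag marks that we
 * are inside an MMIO dispatch, and any nested access that does not
 * target RAM is rejected. */

typedef enum { REGION_RAM, REGION_MMIO } RegionType;

static __thread bool in_mmio_dispatch;   /* set while an MMIO handler runs */

/* Returns true if the access may proceed, false if it is a rejected
 * nested MMIO request. */
static bool dispatch_access(RegionType target)
{
    if (target == REGION_RAM) {
        return true;                     /* RAM is always reachable */
    }
    if (in_mmio_dispatch) {
        return false;                    /* nested MMIO: reject */
    }
    in_mmio_dispatch = true;
    /* ... invoke the device's MMIO handler here; the handler may call
     * dispatch_access() again for DMA, which only succeeds for RAM ... */
    in_mmio_dispatch = false;
    return true;
}
```

Under this scheme a device's DMA to RAM from within its handler still works, while a second hop into another MMIO region fails fast instead of re-entering the dispatcher.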
> >
> > Various iommu-like devices re-dispatch I/O, e.g. changing endianness
> > or bitband. I don't know whether that targets I/O rather than RAM.
> >
> I have not found the exact code, but I think the call chain may look
> like this: dev mmio-handler --> c_p_m_rw() --> iommu mmio-handler -->
> c_p_m_rw()
> And I think you worry about the "c_p_m_rw() --> iommu mmio-handler"
> case, right? How about introducing a can_nest member for the
> MemoryRegionOps of the iommu's mr?

I would rather push the iommu logic into the memory API:

  memory_region_init_iommu(MemoryRegion *mr, const char *name,
                           MemoryRegion *target,
                           MemoryRegionIOMMUOps *ops,
                           unsigned size)

  struct MemoryRegionIOMMUOps {
      target_physical_addr_t (*translate)(target_physical_addr_t addr,
                                          bool write);
      void (*fault)(target_physical_addr_t addr);
  };

I'll look at a proposal for this. It's a generalized case of
memory_region_init_alias().

-- 
I have a truly marvellous patch that fixes the bug which this
signature is too narrow to contain.
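As a footnote to the proposal above, here is a minimal, self-contained sketch of what one MemoryRegionIOMMUOps implementation could look like. The typedef and the struct mirror the proposed API as written in the mail; the linear remap window (REMAP_BASE, REMAP_LIMIT) and the function names are assumptions purely for illustration:

```c
#include <stdbool.h>
#include <stdint.h>

/* Stand-in for QEMU's target address type, to keep the sketch
 * self-contained. */
typedef uint64_t target_physical_addr_t;

/* Ops struct exactly as proposed above. */
struct MemoryRegionIOMMUOps {
    target_physical_addr_t (*translate)(target_physical_addr_t addr,
                                        bool write);
    void (*fault)(target_physical_addr_t addr);
};

/* Hypothetical linear remap window: [0, REMAP_LIMIT) -> REMAP_BASE + addr */
#define REMAP_BASE  0x80000000ULL
#define REMAP_LIMIT 0x10000000ULL

static void example_fault(target_physical_addr_t addr)
{
    (void)addr;          /* a real device would log and raise an error */
}

static target_physical_addr_t example_translate(target_physical_addr_t addr,
                                                bool write)
{
    (void)write;         /* this toy IOMMU ignores the access direction */
    if (addr >= REMAP_LIMIT) {
        example_fault(addr);
        return 0;        /* a real IOMMU would abort the access instead */
    }
    return REMAP_BASE + addr;
}

static const struct MemoryRegionIOMMUOps example_iommu_ops = {
    .translate = example_translate,
    .fault     = example_fault,
};
```

The proposed memory_region_init_iommu() would then wire something like example_iommu_ops between a device's region and its target, much as memory_region_init_alias() does for a fixed offset.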