From: Avi Kivity
Date: Mon, 24 Sep 2012 11:42:34 +0200
Subject: Re: [Qemu-devel] [big lock] Discussion about the convention of device's DMA each other after breaking down biglock
To: liu ping fan
Cc: Jan Kiszka, Marcelo Tosatti, "qemu-devel@nongnu.org", Anthony Liguori, Paolo Bonzini
Message-ID: <50602B0A.1020403@redhat.com>

On 09/24/2012 10:32 AM, liu ping fan wrote:
> On Mon, Sep 24, 2012 at 3:44 PM, Avi Kivity wrote:
>> On 09/24/2012 08:33 AM, liu ping fan wrote:
>>> On Wed, Sep 19, 2012 at 5:50 PM, Avi Kivity wrote:
>>> > On 09/19/2012 12:34 PM, Jan Kiszka wrote:
>>> >>
>>> >> What about the following:
>>> >>
>>> >> What we really need to support in practice is an MMIO access that
>>> >> triggers a RAM access by a device model. Scenarios where a device
>>> >> access triggers another MMIO access could likely just be rejected
>>> >> without causing trouble.
>>> >>
>>> >> So, when we dispatch a request to a device, we mark that the current
>>> >> thread is in an MMIO dispatch and reject any follow-up c_p_m_rw that
>>> >> does _not_ target RAM, i.e. is another, nested MMIO request -
>>> >> independent of its destination. How many of the known issues would
>>> >> this solve? And what would remain open?
>>> >
>>> > Various iommu-like devices re-dispatch I/O, like changing endianness
>>> > or bitband. I don't know whether that targets I/O rather than RAM.
>>> >
>>> I have not found the exact code. But I think the call chain may look
>>> like this: dev mmio-handler --> c_p_m_rw() --> iommu mmio-handler -->
>>> c_p_m_rw()
>>> And I think you worry about the "c_p_m_rw() --> iommu mmio-handler"
>>> case, right? How about introducing a can_nest member in the
>>> MemoryRegionOps of the iommu's mr?
>>>
>>
>> I would rather push the iommu logic into the memory API:
>>
>>   memory_region_init_iommu(MemoryRegion *mr, const char *name,
>>                            MemoryRegion *target, MemoryRegionIOMMUOps *ops,
>>                            unsigned size)
>>
>>   struct MemoryRegionIOMMUOps {
>>       target_physical_addr_t (*translate)(target_physical_addr_t addr,
>>                                           bool write);
>>       void (*fault)(target_physical_addr_t addr);
>>   };
>>
> So I guess that, after introducing this, the code logic in c_p_m_rw()
> will look like this:
>
> c_p_m_rw(dev_virt_addr, ...)
> {
>     mr = phys_page_lookup();
>     if (mr->iommu_ops)
>         real_addr = translate(dev_virt_addr, ...);
>
>     ptr = qemu_get_ram_ptr(real_addr);
>     memcpy(buf, ptr, sz);
> }
>

Something like that.  It will be a while loop, to allow for IOMMUs strung
in series.

-- 
error compiling committee.c: too many arguments to function
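
To make the proposed API more concrete, here is a sketch of how a device
model might register such a region. The memory_region_init_iommu() call and
MemoryRegionIOMMUOps struct are only Avi's proposal in this thread, not
existing QEMU API, and every other concrete name here (my_iommu_translate,
my_iommu_fault, the lookup/fault helpers, the 1 GB window size) is invented
purely for illustration:

    /* Hypothetical: resolve a device-visible address via the IOMMU's
     * page tables and return the translated system address. */
    static target_physical_addr_t my_iommu_translate(target_physical_addr_t addr,
                                                     bool write)
    {
        return my_iommu_lookup(addr, write);   /* invented helper */
    }

    /* Hypothetical: report a failed translation, e.g. by raising the
     * device's translation-fault interrupt. */
    static void my_iommu_fault(target_physical_addr_t addr)
    {
        my_iommu_report_fault(addr);           /* invented helper */
    }

    static MemoryRegionIOMMUOps my_iommu_ops = {
        .translate = my_iommu_translate,
        .fault     = my_iommu_fault,
    };

    /* Route DMA through the IOMMU before it reaches system memory. */
    memory_region_init_iommu(&dev->iommu_mr, "my-iommu",
                             get_system_memory(), &my_iommu_ops,
                             0x40000000);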
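
And a rough sketch of the while-loop form Avi describes, in the same
pseudocode style as the snippet quoted above. The iommu_ops field,
translate() callback, phys_page_lookup() and qemu_get_ram_ptr() names are
taken from this thread, not from the actual tree, so this is illustrative
only:

    c_p_m_rw(dev_virt_addr, ...)
    {
        mr = phys_page_lookup(dev_virt_addr);

        /* IOMMUs may be strung in series, so keep translating until the
         * region backing the address is plain RAM, instead of translating
         * only once or recursing into c_p_m_rw(). */
        while (mr->iommu_ops) {
            dev_virt_addr = mr->iommu_ops->translate(dev_virt_addr, is_write);
            mr = phys_page_lookup(dev_virt_addr);
        }

        ptr = qemu_get_ram_ptr(dev_virt_addr);
        memcpy(buf, ptr, sz);
    }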