From: Avi Kivity <avi@redhat.com>
To: liu ping fan <qemulist@gmail.com>
Cc: Jan Kiszka <jan.kiszka@siemens.com>,
Marcelo Tosatti <mtosatti@redhat.com>,
"qemu-devel@nongnu.org" <qemu-devel@nongnu.org>,
Anthony Liguori <anthony@codemonkey.ws>,
Paolo Bonzini <pbonzini@redhat.com>
Subject: Re: [Qemu-devel] [big lock] Discussion about the convention of device's DMA each other after breaking down biglock
Date: Sun, 30 Sep 2012 10:13:50 +0200
Message-ID: <5067FF3E.7040003@redhat.com>
In-Reply-To: <CAJnKYQnDVyX4VZG74aumpQ_qzJHPm_4o2R6MRX71OSP-yzU0Gw@mail.gmail.com>
On 09/29/2012 11:20 AM, liu ping fan wrote:
>
> Do we have iommus in qemu now,
We do, but they're hacked into the scsi layer, see hw/sun4m_iommu.c. I
don't know if it's a standalone iommu on real hardware or whether it is
part of the HBA.
> since there are no separate phys_maps
> for real addresses and the device's virtual addresses, and I think the iommu
> is only needed by the host, not the guest, so it need not be emulated by qemu.
Eventually we will emulate iommus for x86 too, so we need to consider them.
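As a rough illustration (not code from this thread, and not the memory API of the time), what an emulated iommu adds to the DMA path is a translation step from the device-visible address to a guest physical address before the access is performed; the GuestIOMMU type and the iommu_dma_rw()/translate names below are hypothetical:

#include <stdint.h>
#include <stdbool.h>
#include "exec/cpu-common.h"   /* cpu_physical_memory_rw(); header path varies by QEMU version */

typedef struct GuestIOMMU GuestIOMMU;

struct GuestIOMMU {
    /* Returns true and fills *gpa with the guest physical address for a
     * device-visible address, or false if the mapping forbids the access. */
    bool (*translate)(GuestIOMMU *iommu, uint64_t dev_addr,
                      bool is_write, uint64_t *gpa);
};

/* DMA on behalf of a device sitting behind an emulated iommu: translate
 * first, then perform the access against guest physical memory. */
static void iommu_dma_rw(GuestIOMMU *iommu, uint64_t dev_addr,
                         uint8_t *buf, int len, int is_write)
{
    uint64_t gpa;

    if (!iommu->translate(iommu, dev_addr, is_write, &gpa)) {
        /* A real device model would report a translation fault here. */
        return;
    }
    cpu_physical_memory_rw(gpa, buf, len, is_write);
}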
> If not, we
> can just reject nested DMA so that c_p_m_rw() can only be nested
> once; then, if we introduce a wrapper for c_p_m_rw(), we can avoid
> taking the big lock recursively, right?
Don't we need that for other reasons? If not, we can drop it for now.
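For what it's worth, a minimal sketch of the wrapper being proposed (not code from this thread; dma_rw_checked() and the per-thread counter are made-up names, and the exact cpu_physical_memory_rw() signature depends on the QEMU version) might look like this:

#include <stdint.h>
#include "exec/cpu-common.h"   /* cpu_physical_memory_rw(); header path varies by QEMU version */
#include "hw/hw.h"             /* hw_error() */

/* Depth 0: DMA initiated outside any MMIO handler.  Depth 1: DMA issued
 * from within a device's MMIO handler, i.e. nested once.  Anything
 * deeper is rejected instead of re-acquiring the big lock. */
static __thread int dma_nesting;

static void dma_rw_checked(uint64_t addr, uint8_t *buf, int len, int is_write)
{
    if (dma_nesting >= 2) {
        hw_error("device DMA nested more than once\n");
    }

    dma_nesting++;
    cpu_physical_memory_rw(addr, buf, len, is_write);
    dma_nesting--;
}

Device models would then call such a wrapper instead of c_p_m_rw() directly, so the nesting check lives in one place.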
--
error compiling committee.c: too many arguments to function
Thread overview: 45+ messages
2012-09-19 3:02 [Qemu-devel] [big lock] Discussion about the convention of device's DMA each other after breaking down biglock liu ping fan
2012-09-19 8:06 ` Avi Kivity
2012-09-19 9:00 ` liu ping fan
2012-09-19 9:07 ` Avi Kivity
2012-09-19 9:11 ` liu ping fan
2012-09-19 9:14 ` Paolo Bonzini
2012-09-19 9:19 ` liu ping fan
2012-09-19 9:23 ` Avi Kivity
2012-09-19 9:27 ` Jan Kiszka
2012-09-19 9:28 ` Jan Kiszka
2012-09-20 7:51 ` liu ping fan
2012-09-20 7:54 ` Paolo Bonzini
2012-09-20 8:09 ` liu ping fan
2012-09-20 8:27 ` Paolo Bonzini
2012-09-20 9:07 ` Avi Kivity
2012-09-21 7:27 ` liu ping fan
2012-09-21 8:21 ` Paolo Bonzini
2012-09-19 9:21 ` Avi Kivity
2012-09-19 9:51 ` Paolo Bonzini
2012-09-19 10:06 ` Avi Kivity
2012-09-19 10:19 ` Paolo Bonzini
2012-09-19 10:27 ` Avi Kivity
2012-09-19 9:34 ` Jan Kiszka
2012-09-19 9:50 ` Avi Kivity
2012-09-19 10:18 ` Jan Kiszka
2012-09-24 6:33 ` liu ping fan
2012-09-24 7:44 ` Avi Kivity
2012-09-24 8:32 ` liu ping fan
2012-09-24 9:42 ` Avi Kivity
2012-09-27 3:13 ` liu ping fan
2012-09-27 9:16 ` Avi Kivity
2012-09-27 9:29 ` Paolo Bonzini
2012-09-27 9:34 ` Avi Kivity
2012-09-27 9:36 ` Paolo Bonzini
2012-09-27 10:08 ` Avi Kivity
2012-09-27 10:22 ` Paolo Bonzini
2012-09-27 10:48 ` Avi Kivity
2012-09-29 9:20 ` liu ping fan
2012-09-30 8:13 ` Avi Kivity [this message]
2012-09-30 8:48 ` liu ping fan
2012-09-30 11:18 ` Avi Kivity
2012-09-30 11:04 ` Blue Swirl
2012-09-30 11:17 ` Avi Kivity
2012-09-30 11:48 ` Blue Swirl
2012-09-20 8:11 ` liu ping fan