From: Dario Faggioli <dario.faggioli@citrix.com>
To: Jan Beulich <JBeulich@suse.com>, Quan Xu <quan.xu@intel.com>
Cc: Kevin Tian <kevin.tian@intel.com>, Feng Wu <feng.wu@intel.com>,
"george.dunlap@eu.citrix.com" <george.dunlap@eu.citrix.com>,
"andrew.cooper3@citrix.com" <andrew.cooper3@citrix.com>,
"tim@xen.org" <tim@xen.org>,
"xen-devel@lists.xen.org" <xen-devel@lists.xen.org>
Subject: Re: [PATCH v5 5/7] VT-d: Refactor iommu_ops .map_page() and unmap_page()
Date: Thu, 25 Feb 2016 14:12:04 +0100
Message-ID: <1456405924.6288.113.camel@citrix.com>
In-Reply-To: <56CF005F02000078000D626C@prv-mh.provo.novell.com>
On Thu, 2016-02-25 at 05:23 -0700, Jan Beulich wrote:
> > > > On 25.02.16 at 13:14, <quan.xu@intel.com> wrote:
> >
> > To me, this might be fine.
> > Does the per-CPU flag refer to this_cpu(iommu_dont_flush_iotlb) or
> > a variant of it?
>
> Yes. But I'd prefer ...
>
> > > However, the same effect could be achieved
> > > by making the lock a recursive one, which would then seem the more
> > > conventional approach (but requiring as much code to be touched).
> > > Both approaches would eliminate the need to pass down "locked"
> > > flags.
>
> ... this one (all the more so since the other won't mean fewer changes).
>
FWIW (which is not much, given my still very limited experience with
this code :-)), I also think the recursive lock approach is better.
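
To illustrate the trade-off being discussed (a minimal userspace sketch,
not Xen code: it uses a POSIX recursive mutex as a stand-in for Xen's
spin_lock_recursive(), and the function names are made up for the
example), with a recursive lock the inner helper can simply take the lock
itself, so no "locked" flag has to be threaded through every caller:

/*
 * Userspace analogy of the "recursive lock vs. 'locked' flag" question;
 * this is NOT Xen code.  The recursive primitive in Xen would be
 * spin_lock_recursive(); mapping_lock, map_and_flush() and
 * flush_iotlb_entry() are made-up names.
 */
#include <pthread.h>
#include <stdio.h>

static pthread_mutex_t mapping_lock;

/*
 * Inner helper: may be called with or without the lock already held.
 * Because the mutex is recursive, it just takes the lock itself; no
 * "locked" flag needs to be passed down by its callers.
 */
static void flush_iotlb_entry(unsigned long gfn)
{
    pthread_mutex_lock(&mapping_lock);
    printf("flushing gfn %#lx\n", gfn);
    pthread_mutex_unlock(&mapping_lock);
}

/* Outer path: already holds the lock around a batch of updates. */
static void map_and_flush(unsigned long gfn)
{
    pthread_mutex_lock(&mapping_lock);
    /* ... update the page tables ... */
    flush_iotlb_entry(gfn);          /* re-acquires without deadlocking */
    pthread_mutex_unlock(&mapping_lock);
}

int main(void)
{
    pthread_mutexattr_t attr;

    pthread_mutexattr_init(&attr);
    pthread_mutexattr_settype(&attr, PTHREAD_MUTEX_RECURSIVE);
    pthread_mutex_init(&mapping_lock, &attr);

    map_and_flush(0x1234);
    return 0;
}

The per-CPU-flag alternative would instead have the inner code consult a
per-CPU variable set by the outer caller (in the spirit of
this_cpu(iommu_dont_flush_iotlb)), which also avoids parameter plumbing
but spreads that knowledge across the call chain.
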
Regards,
Dario
--
<<This happens because I choose it to happen!>> (Raistlin Majere)
-----------------------------------------------------------------
Dario Faggioli, Ph.D, http://about.me/dario.faggioli
Senior Software Engineer, Citrix Systems R&D Ltd., Cambridge (UK)
Thread overview: 17+ messages
2016-02-25 6:56 [PATCH v5 5/7] VT-d: Refactor iommu_ops .map_page() and unmap_page() Xu, Quan
2016-02-25 8:59 ` Jan Beulich
2016-02-25 12:14 ` Xu, Quan
2016-02-25 12:23 ` Jan Beulich
2016-02-25 13:12 ` Dario Faggioli [this message]
2016-02-26 1:55 ` Xu, Quan
2016-02-26 7:37 ` Xu, Quan
2016-02-26 8:14 ` Jan Beulich
2016-02-26 8:21 ` Jan Beulich
2016-02-26 9:24 ` Xu, Quan
2016-02-26 10:11 ` Jan Beulich
2016-02-26 11:48 ` Xu, Quan
2016-02-26 12:33 ` Jan Beulich
2016-02-26 10:08 ` Xu, Quan
2016-02-26 10:13 ` Jan Beulich
-- strict thread matches above, loose matches on Subject: below --
2016-02-05 10:18 [PATCH v5 0/7] VT-d Device-TLB flush issue Quan Xu
2016-02-05 10:18 ` [PATCH v5 5/7] VT-d: Refactor iommu_ops .map_page() and unmap_page() Quan Xu
2016-02-17 14:23 ` Jan Beulich