From: Chenyi Qiang <chenyi.qiang@intel.com>
To: "David Hildenbrand" <david@redhat.com>,
"Paolo Bonzini" <pbonzini@redhat.com>,
"Peter Xu" <peterx@redhat.com>,
"Philippe Mathieu-Daudé" <philmd@linaro.org>,
"Michael Roth" <michael.roth@amd.com>
Cc: <qemu-devel@nongnu.org>, <kvm@vger.kernel.org>,
Williams Dan J <dan.j.williams@intel.com>,
Edgecombe Rick P <rick.p.edgecombe@intel.com>,
Wang Wei W <wei.w.wang@intel.com>,
Peng Chao P <chao.p.peng@intel.com>,
"Gao Chao" <chao.gao@intel.com>, Wu Hao <hao.wu@intel.com>,
Xu Yilun <yilun.xu@intel.com>
Subject: Re: [RFC PATCH 0/6] Enable shared device assignment
Date: Fri, 2 Aug 2024 15:00:05 +0800
Message-ID: <de021f3f-4ccd-4d51-a3ed-439ed9f23515@intel.com>
In-Reply-To: <d299bbad-81bc-462e-91b5-a6d9c27ffe3a@redhat.com>
On 7/31/2024 7:18 PM, David Hildenbrand wrote:
> Sorry for the late reply!
>
>>> Current users must skip it, yes. How private memory would have to be
>>> handled, and who would handle it, is rather unclear.
>>>
>>> Again, maybe we'd want separate RamDiscardManagers for private and
>>> shared memory (after all, these are two separate memory backends).
>>
>> We also considered distinguishing the populate and discard operations
>> for private and shared memory. As in method 2 above, we mentioned
>> adding a new argument to indicate which memory attribute to operate on.
>> That seems to follow a similar idea.
>
> Yes. Likely it's just some implementation detail. I think the following
> states would be possible:
>
> * Discarded in shared + discarded in private (not populated)
> * Discarded in shared + populated in private (private populated)
> * Populated in shared + discarded in private (shared populated)
>
> One could map these to states discarded/private/shared indeed.
Makes sense. We can follow this approach if the RamDiscardManager
mechanism is acceptable and there are no other concerns.
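
To make sure we mean the same thing, below is a minimal sketch of that
tri-state tracking (all names are hypothetical, not from this series):

/* Sketch only: hypothetical names, not the posted implementation. */
#include <stdbool.h>

typedef enum GmemBlockState {
    GMEM_BLOCK_DISCARDED,   /* discarded in shared + discarded in private */
    GMEM_BLOCK_PRIVATE,     /* discarded in shared + populated in private */
    GMEM_BLOCK_SHARED,      /* populated in shared + discarded in private */
} GmemBlockState;

/*
 * Only transitions that change the shared-populated view would need to
 * reach RamDiscardListener-style clients (e.g. VFIO), since they only
 * map shared memory.
 */
static bool change_needs_notify(GmemBlockState prev, GmemBlockState next)
{
    return (prev == GMEM_BLOCK_SHARED) != (next == GMEM_BLOCK_SHARED);
}
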
>
> [...]
>
>>> I've had this talk with Intel, because the 4K granularity is a pain. I
>>> was told that ship has sailed ... and we have to cope with random 4K
>>> conversions :(
>>>
>>> The many mappings will likely add both memory and runtime overheads in
>>> the kernel. But we'll only know once we measure.
>>
>> In the normal case, the main runtime overhead comes from
>> private<->shared flips in SWIOTLB, which defaults to 6% of memory with
>> a maximum of 1 GiB. I think this overhead is acceptable. In
>> non-default cases, e.g. dynamically allocated DMA buffers, the runtime
>> overhead will increase. As for the memory overhead, it is indeed
>> unavoidable.
>>
>> Will these performance issues be a deal breaker for enabling shared
>> device assignment in this way?
>
> I see the most problematic part being the dma_entry_limit and all of
> these individual MAP/UNMAP calls on 4KiB granularity.
>
> dma_entry_limit is "unsigned int", and defaults to U16_MAX. So the
> possible maximum is 4294967295 (UINT_MAX), and the default is 65535.
>
> So we should be able to have a maximum of 16 TiB of shared memory, all
> in 4 KiB chunks.
>
> sizeof(struct vfio_dma) is probably something like <= 96 bytes, implying
> a per-page overhead of ~2.4%, excluding the actual rbtree.
>
> Tree lookup/modifications with that many nodes might also get a bit
> slower, but likely still tolerable as you note.
>
> Deal breaker? Not sure. Rather "suboptimal" :) ... but maybe unavoidable
> for your use case?
Yes. We can't guarantee the guest's behavior, so the overhead would be
uncertain and unavoidable.
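
For reference, a quick back-of-envelope program for the numbers above
(just a sketch: it assumes 4 KiB pages, the 6%-capped-at-1-GiB SWIOTLB
default, and your sizeof(struct vfio_dma) == 96 bytes estimate):

/* overhead.c -- illustrative math only, using the assumptions above */
#include <inttypes.h>
#include <stdint.h>
#include <stdio.h>

int main(void)
{
    const uint64_t KiB = 1024, MiB = KiB * KiB, GiB = MiB * KiB,
                   TiB = GiB * KiB;

    /* SWIOTLB default for confidential guests: 6% of RAM, capped at 1 GiB */
    uint64_t guest_ram = 32 * GiB;
    uint64_t swiotlb = guest_ram / 100 * 6;
    if (swiotlb > GiB) {
        swiotlb = GiB;
    }
    printf("SWIOTLB for a %" PRIu64 " GiB guest: %" PRIu64 " MiB\n",
           guest_ram / GiB, swiotlb / MiB);

    /* dma_entry_limit is unsigned int, so at most ~2^32 vfio_dma
     * entries; at 4 KiB per mapping that bounds shared memory at
     * ~16 TiB */
    printf("max shared memory in 4 KiB mappings: ~%.1f TiB\n",
           (double)UINT32_MAX * 4 * KiB / TiB);

    /* per-page tracking overhead: 96 / 4096 bytes = ~2.34% */
    printf("vfio_dma overhead per 4 KiB page: %.2f%%\n",
           96.0 / 4096 * 100);
    return 0;
}

For a 32 GiB guest this prints a 1024 MiB SWIOTLB (the cap kicks in),
~16.0 TiB of addressable shared memory, and a ~2.34% per-page tracking
overhead, matching the estimates above.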