From: David Hildenbrand <david@redhat.com>
To: "Chenyi Qiang" <chenyi.qiang@intel.com>,
	"Peter Xu" <peterx@redhat.com>,
	"Alexey Kardashevskiy" <aik@amd.com>,
	"Paolo Bonzini" <pbonzini@redhat.com>,
	"Philippe Mathieu-Daudé" <philmd@linaro.org>
Cc: qemu-devel@nongnu.org, Gao Chao <chao.gao@intel.com>,
	Li Xiaoyao <xiaoyao.li@intel.com>
Subject: Re: [PATCH] ram-block-attributes: Avoid the overkill of shared memory with hugetlbfs backend
Date: Fri, 17 Oct 2025 17:13:30 +0200	[thread overview]
Message-ID: <bc7c734d-28c4-4abf-8049-a93e2e5e0b1f@redhat.com> (raw)
In-Reply-To: <20251017081445.175342-1-chenyi.qiang@intel.com>

On 17.10.25 10:14, Chenyi Qiang wrote:
> Currently, private memory and shared memory use different backends in
> CoCo VMs. Users may back shared memory with hugetlbfs, while private
> memory uses the guest_memfd backend, which only supports a 4K page
> size. In that case, ram_block->page_size differs from the host page
> size, which triggers the assertion when getting the block size. Relax
> the restriction so that shared memory can use a hugetlbfs backend.
> 
> Fixes: 5d6483edaa92 ("ram-block-attributes: Introduce RamBlockAttributes to manage RAMBlock with guest_memfd")
> Signed-off-by: Chenyi Qiang <chenyi.qiang@intel.com>
> ---
>   system/ram-block-attributes.c | 7 ++++---
>   1 file changed, 4 insertions(+), 3 deletions(-)
> 
> diff --git a/system/ram-block-attributes.c b/system/ram-block-attributes.c
> index 68e8a027032..0f39ccf9090 100644
> --- a/system/ram-block-attributes.c
> +++ b/system/ram-block-attributes.c
> @@ -28,10 +28,11 @@ ram_block_attributes_get_block_size(const RamBlockAttributes *attr)
>        * Because page conversion could be manipulated in the size of at least 4K
>        * or 4K aligned, Use the host page size as the granularity to track the
>        * memory attribute.
> +     * When hugetlbfs is used as backend of shared memory, ram_block->page_size
> +     * is different from host page size. So it is not appropriate to use
> +     * ram_block->page_size here.
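
As an aside, a minimal sketch (an assumption, not the literal patch,
whose hunk is trimmed above) of what the relaxed helper would boil down
to if it simply returns the host page size instead of asserting on
ram_block->page_size; qemu_real_host_page_size() is QEMU's existing
helper for the host page size:

/* Sketch as it would sit inside system/ram-block-attributes.c, relying
 * on that file's existing includes; not the literal patch. */
static size_t
ram_block_attributes_get_block_size(const RamBlockAttributes *attr)
{
    /*
     * Page conversions happen in host-page-sized (typically 4K) chunks,
     * so track attributes at host page granularity even when the shared
     * backend (e.g. hugetlbfs) uses a larger ram_block->page_size.
     */
    g_assert(attr && attr->ram_block);
    return qemu_real_host_page_size();
}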

But are we sure everything else is working as expected and that this is 
not a check that prevents other code from doing the wrong thing?

I recall that punching holes was problematic, as the VM shares/unshares 
4k chunks.
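
A rough standalone illustration of that point (path and sizes are
placeholders, not from QEMU): on hugetlbfs, FALLOC_FL_PUNCH_HOLE only
releases whole huge pages, so a 4k-granular discard, as done on a
shared->private conversion, leaves the backing 2M/1G page allocated:

#define _GNU_SOURCE
#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    /* Placeholder path for a mounted hugetlbfs instance (2M pages). */
    int fd = open("/dev/hugepages/demo", O_CREAT | O_RDWR, 0600);
    if (fd < 0) {
        perror("open");
        return 1;
    }
    /* Preallocate one 2M huge page. */
    if (fallocate(fd, 0, 0, 2 * 1024 * 1024) < 0) {
        perror("fallocate");
        return 1;
    }
    /*
     * Punch a single 4k hole, mimicking a 4k conversion. The call
     * succeeds, but hugetlbfs only frees whole huge pages, so the
     * backing 2M page is not released (newer kernels merely zero the
     * partial range).
     */
    if (fallocate(fd, FALLOC_FL_PUNCH_HOLE | FALLOC_FL_KEEP_SIZE,
                  0, 4096) < 0) {
        perror("punch hole");
    }
    close(fd);
    return 0;
}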

-- 
Cheers

David / dhildenb



Thread overview: 7+ messages
2025-10-17  8:14 [PATCH] ram-block-attributes: Avoid the overkill of shared memory with hugetlbfs backend Chenyi Qiang
2025-10-17 13:57 ` Peter Xu
2025-10-17 15:13 ` David Hildenbrand [this message]
2025-10-20 10:32   ` Chenyi Qiang
2025-10-20 11:31     ` David Hildenbrand
2025-10-20 11:48       ` Chenyi Qiang
2025-10-20 11:34 ` David Hildenbrand
