From: Peter Xu <peterx@redhat.com>
To: Fabiano Rosas <farosas@suse.de>
Cc: qemu-devel@nongnu.org, Vitaly Kuznetsov <vkuznets@redhat.com>,
	David Hildenbrand <david@redhat.com>,
	Paolo Bonzini <pbonzini@redhat.com>,
	Prasad Pandit <ppandit@redhat.com>,
	Juraj Marcin <jmarcin@redhat.com>,
	Julia Suvorova <jusual@redhat.com>,
	qemu-stable <qemu-stable@nongnu.org>,
	Zhiyi Guo <zhguo@redhat.com>
Subject: Re: [PATCH v3 1/4] KVM: Dynamic sized kvm memslots array
Date: Tue, 17 Sep 2024 11:52:51 -0400	[thread overview]
Message-ID: <Zuml03p5y5ip6e7e@x1n> (raw)
In-Reply-To: <87y13rzdtn.fsf@suse.de>

On Mon, Sep 16, 2024 at 02:52:04PM -0300, Fabiano Rosas wrote:
> >>  
> >> +    /*
> >> +     * A VM will at least require a few memslots to work, or it can even
> >> +     * fail to boot.  Make sure the supported value is always at least
> >> +     * larger than what we will initially allocate.
> >
> > The commit message says 16 was chosen to cover basic usage, which is
> > fine. But here we're disallowing anything smaller. Shouldn't QEMU always
> > respect what KVM decided? Of course, setting aside bugs or other
> > scenarios that could result in the ioctl returning 0. Could some kernel
> > implementation at some point want to reduce the max number of memslots
> > and then get effectively denied because QEMU thinks otherwise?

I'd say it's unlikely to happen, but indeed failing here might be based
too much on the artificial KVM_MEMSLOTS_NR_ALLOC_DEFAULT value I came up with.

If this check is removed, I suppose we're still fine.  It means that later,
when QEMU initializes the kvm memslots here:

    kvm_slots_grow(kml, KVM_MEMSLOTS_NR_ALLOC_DEFAULT);

the allocation will be throttled to whatever KVM reports, if that is lower
than 16.  Then, if the memslots are not enough, we'll fail at memslot
allocation as soon as more are requested than KVM offers:

    fprintf(stderr, "%s: no free slot available\n", __func__);
    abort();

Yeah, maybe it is better to fail here.  Even though I think we'll never hit
it, it's still good to remove lines that aren't needed.  Let me remove this
check in my next post.

Thanks,

-- 
Peter Xu



Thread overview: 8+ messages
2024-09-09 14:54 [PATCH v3 0/4] KVM: Dynamic sized memslots array Peter Xu
2024-09-09 14:54 ` [PATCH v3 1/4] KVM: Dynamic sized kvm " Peter Xu
2024-09-16 17:47   ` Fabiano Rosas
2024-09-16 17:52     ` Fabiano Rosas
2024-09-17 15:52       ` Peter Xu [this message]
2024-09-09 14:54 ` [PATCH v3 2/4] KVM: Define KVM_MEMSLOTS_NUM_MAX_DEFAULT Peter Xu
2024-09-09 14:54 ` [PATCH v3 3/4] KVM: Rename KVMMemoryListener.nr_used_slots to nr_slots_used Peter Xu
2024-09-09 14:54 ` [PATCH v3 4/4] KVM: Rename KVMState->nr_slots to nr_slots_max Peter Xu
