From: Sean Christopherson <seanjc@google.com>
To: Shivam Kumar <shivam.kumar1@nutanix.com>
Cc: pbonzini@redhat.com, kvm@vger.kernel.org,
Shaju Abraham <shaju.abraham@nutanix.com>,
Manish Mishra <manish.mishra@nutanix.com>,
Anurag Madnawat <anurag.madnawat@nutanix.com>
Subject: Re: [PATCH v3 2/3] KVM: Documentation: Update kvm_run structure for dirty quota
Date: Thu, 31 Mar 2022 00:40:14 +0000 [thread overview]
Message-ID: <YkT4bvK+tbsVDAvt@google.com> (raw)
In-Reply-To: <20220306220849.215358-3-shivam.kumar1@nutanix.com>
On Sun, Mar 06, 2022, Shivam Kumar wrote:
> Update the kvm_run structure with a brief description of dirty
> quota members and how dirty quota throttling works.
This should be squashed with patch 1. I actually had to look ahead to this patch
because I forgot the details since I last reviewed this :-)
> Suggested-by: Shaju Abraham <shaju.abraham@nutanix.com>
> Suggested-by: Manish Mishra <manish.mishra@nutanix.com>
> Co-developed-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> Signed-off-by: Anurag Madnawat <anurag.madnawat@nutanix.com>
> Signed-off-by: Shivam Kumar <shivam.kumar1@nutanix.com>
> ---
> Documentation/virt/kvm/api.rst | 28 ++++++++++++++++++++++++++++
> 1 file changed, 28 insertions(+)
>
> diff --git a/Documentation/virt/kvm/api.rst b/Documentation/virt/kvm/api.rst
> index 9f3172376ec3..50e001473b1f 100644
> --- a/Documentation/virt/kvm/api.rst
> +++ b/Documentation/virt/kvm/api.rst
> @@ -6125,6 +6125,23 @@ array field represents return values. The userspace should update the return
> values of SBI call before resuming the VCPU. For more details on RISC-V SBI
> spec refer, https://github.com/riscv/riscv-sbi-doc.
>
> +::
> +
> + /* KVM_EXIT_DIRTY_QUOTA_EXHAUSTED */
> + struct {
> + __u64 count;
> + __u64 quota;
> + } dirty_quota_exit;
> +If the exit reason is KVM_EXIT_DIRTY_QUOTA_EXHAUSTED, the VCPU has exhausted
> +its dirty quota. The 'dirty_quota_exit' member of the kvm_run structure makes
> +the following information available to userspace:
> +  'count' field: the current count of pages dirtied by the VCPU,
> +  'quota' field: the observed dirty quota just before the exit to userspace.
> +Userspace can design a strategy to distribute the overall scope of dirtying
> +for the VM among the vcpus. Based on that strategy and the current state of
> +dirty quota throttling, userspace can decide to either update (increase) the
> +quota or put the VCPU to sleep for some time.
> +
> ::
>
> /* Fix the size of the union. */
> @@ -6159,6 +6176,17 @@ values in kvm_run even if the corresponding bit in kvm_dirty_regs is not set.
>
> ::
>
> + /*
> + * Number of pages the vCPU is allowed to have dirtied over its entire
> + * lifetime. KVM_RUN exits with KVM_EXIT_DIRTY_QUOTA_EXHAUSTED if the quota
> + * is reached/exceeded.
> + */
> + __u64 dirty_quota;
> +Please note that this quota cannot be strictly enforced if PML is enabled, and
> +the VCPU may end up dirtying more pages than its quota allows. The difference,
> +however, is bounded by the PML buffer size.
If you want to be pedantic, I doubt KVM can strictly enforce the quota even if PML
is disabled. E.g. I can all but guarantee that it's possible to dirty multiple
pages during a single exit. Probably also worth spelling out PML and genericizing
things. Maybe
Please note that enforcing the quota is best effort, as the guest may dirty
multiple pages before KVM can recheck the quota. However, unless KVM is using
a hardware-based dirty ring buffer, e.g. Intel's Page Modification Logging,
KVM will detect quota exhaustion within a handful of dirtied pages. If a
hardware ring buffer is used, the overrun is bounded by the size of the buffer
(512 entries for PML).